The Chief Analytics Officer of Mode recently highlighted the rise of increasingly complex and fragile data stacks that often don’t align with how companies actually use data.
In his words, “because of the implicit technical divide (between technical and non-technical users) in the industry, data tools are almost always designed and sold for one audience or the other.”
It may be time to rethink this divide. Increasingly, analysts are expected to combine traditional business acumen with the technical expertise of data engineers.
As Benn Stancil points out, embedding analytics within engineering teams often implies that analysts must also be engineers. But in reality, the most valuable contributions analysts make are often not technical—they’re rooted in critical thinking, business context, and communication.
However, is this the right way to leverage the talent of a data analyst? Stancil argues it can go terribly wrong: by and large, the hardest and most important problems analysts work on aren’t technical, or even mathematical.
One emerging solution is the rise of the ‘analytics engineer’—a role bridging analytics and engineering. This hybrid role enables analysts to focus on critical thinking, while analytics engineers provide the technical expertise to manage infrastructure and pipelines.
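In practice, this split can be as simple as separating who owns the transformation from who owns the business question. The minimal sketch below is purely illustrative and assumes a pandas workflow; the function, table, and column names are hypothetical and do not refer to any specific tool.

```python
# Illustrative sketch only: hypothetical names, not a real pipeline or API.
import pandas as pd

# --- Analytics engineer's side: owns the transformation and its checks ---
def build_orders_model(raw_orders: pd.DataFrame) -> pd.DataFrame:
    """Clean and standardise raw orders into a trusted, documented table."""
    model = raw_orders.dropna(subset=["order_id", "amount"]).copy()
    model["amount"] = model["amount"].astype(float)
    assert model["order_id"].is_unique, "duplicate order_id in source data"
    return model

# --- Analyst's side: consumes the trusted table and focuses on the question ---
def revenue_by_region(orders_model: pd.DataFrame) -> pd.Series:
    """Business question: which regions drive revenue?"""
    return orders_model.groupby("region")["amount"].sum().sort_values(ascending=False)

if __name__ == "__main__":
    raw = pd.DataFrame(
        {"order_id": [1, 2, 3], "region": ["EU", "US", "EU"], "amount": [10, 20, 5]}
    )
    print(revenue_by_region(build_orders_model(raw)))
```

The analytics engineer owns build_orders_model and its checks; the analyst works only against the trusted output and spends their time on the question, not the plumbing.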
Too often, tools and platforms are built around narrow roles instead of serving the broader DataOps ecosystem. When the segmentation of roles, responsibilities, and skills dictates which tools and systems are used, teams risk creating silos, losing sight of end-to-end data flows, and becoming tool-oriented rather than goal-oriented, at the expense of business outcomes.
Specialization can be valuable—but when it’s driven by poor tools, it fragments the data ecosystem. Instead of accelerating insights, teams become bogged down by complexity and tool-specific expertise.
What’s the way forward?
Instead of splitting by functionality, the data stack should be built around how data is consumed. Stancil suggests that “the modern data stack doesn’t need a BI bucket and a data science bucket; it needs a unified consumption layer. To do our job well, we have to overcome the technical division, not be defined by it. Analytical needs don’t end at the code’s edge.”
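One way to read a “unified consumption layer” is that a metric is defined once and then consumed through whichever interface a given user needs, rather than being re-implemented in a BI tool and again in a notebook. The sketch below is an assumption-laden illustration, not anyone’s actual architecture: MetricStore and its methods are hypothetical names, and pandas stands in for whatever storage and query layer a team really uses.

```python
# Hypothetical sketch of a unified consumption layer: one metric definition
# feeds both a BI-style aggregate and a data-science-style dataframe.
import pandas as pd

class MetricStore:
    """Single place where a metric is defined once and consumed many ways."""

    def __init__(self, events: pd.DataFrame):
        self.events = events

    def weekly_active_users(self) -> pd.DataFrame:
        """Shared definition: one row per user per ISO week."""
        df = self.events.copy()
        df["week"] = df["timestamp"].dt.to_period("W").astype(str)
        return df.drop_duplicates(["week", "user_id"])[["week", "user_id"]]

    # BI-style consumption: a small aggregate for a dashboard tile.
    def wau_summary(self) -> pd.Series:
        return self.weekly_active_users().groupby("week")["user_id"].nunique()

    # Data-science-style consumption: the same underlying rows,
    # returned as a dataframe for modelling or ad hoc analysis.
    def wau_frame(self) -> pd.DataFrame:
        return self.weekly_active_users()

if __name__ == "__main__":
    events = pd.DataFrame(
        {
            "user_id": [1, 1, 2, 3, 1],
            "timestamp": pd.to_datetime(
                ["2024-01-01", "2024-01-02", "2024-01-03", "2024-01-08", "2024-01-09"]
            ),
        }
    )
    store = MetricStore(events)
    print(store.wau_summary())  # dashboard number
    print(store.wau_frame())    # notebook-ready rows
```

The point is architectural rather than about any particular library: the dashboard aggregate and the modelling dataframe derive from the same definition, so the “BI bucket” and the “data science bucket” never diverge.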
Overcomplicating the data stack with too many tools and platforms carries many risks, and orchestrating and syncing them is only part of the problem. A stack that isn’t built with data usage and data flows at its core is harder to manage and operate, and very few people ever get a holistic view of how it runs.
Over time, this multi-layered complexity risks doing the opposite of what it was intended to do—making data harder to manage, understand, and act on. To unlock true value, organizations need ecosystems built on simplicity, usability, and alignment with business goals.