BI’s Second Unbundling
Software engineering started eating data a decade+ ago; it just took a little while to get to the presentation layer.
Have you built a dashboard, or even just a chart, in Claude Code? I have. And I’m not alone. It is turning out to be an incredibly common modality.
Anthropic recently launched live artifacts in Claude. You can now build a working, data-connected dashboard in a 15-minute conversation, no BI tool required.
Leave aside, for a moment, the correctness question (plenty of ink has been spilled on the topic of creating trustworthy analytical outputs from an LLM) and let's just focus on the analytical workflow for a second.
The ‘create charts in Claude’ workflow is better in a bunch of ways than the ‘hit lots of buttons in a chart builder inside of a BI tool’ workflow, but it’s clearly still in its infancy. Very few companies (although I’ve spoken to a couple!) are deciding to trash their BI tool in favor of a purely vibe-coded approach.
So the question I want to ask is: what direction is this headed in? What should “next generation BI” look like? (Is there even such a thing as a BI tool in this world?)
Let’s start with the historical perspective.
BI has always been a bundle
When I started in data, BI tools were full-stack. Everything happened inside one product: data ingestion, transformation, compute, caching, semantics, visualization, identity. The BI tool was the data stack. MicroStrategy, Cognos, and their peers weren't just visualization tools; they were integrated data platforms.
Then the modern data stack happened. From ~2015 to 2022, the infrastructure layers of that BI bundle got pulled out and turned into purpose-built infrastructure. Compute went to the Big 5. Ingestion went to Fivetran. Transformation went to dbt. The BI tool was left with: visualization, interactive analytical interfaces, semantic definitions (sometimes!), identity and access management, and web hosting. You could probably squint and see a few more, but I think the simplification works for our purposes.
That’s still a lot. But it’s much less than it used to be. This was the Copernican Revolution of BI—the universe just doesn’t rotate around it any more.
If 2015-22 was the first unbundling of BI, the second one is happening right now.
A false start, and then a real one
Two companies have been predicting the second unbundling for several years. IMO they were basically right, just early.
Both Evidence.dev and Hashboard have been building “BI as code” for a while. In the briefest possible terms, I would describe both products as SQL plus markdown, version controlled, that deploys like a web app.
To my knowledge, both have gotten only modest traction, despite working on the problem for years. My read is that the authoring environment of a BI tool (point-and-click, immediate visual feedback) was native to the analyst workflow in a way that hand-coding YAML simply was not. Defining dashboards in code is vulnerable to the same problem as designing charts in Matplotlib; there’s just something really weird and unpleasant about writing 20 lines of config to make a scatter plot.
But it turns out that, maybe, both tools were just ahead of their moment. Because now, the front end of everything is turning into coding agents.
Analysts are shifting left. More and more, their primary interface is an agentic coding environment, not a drag-and-drop GUI. When you spend your day in Claude Code, “generate me a dashboard YAML file and render it” starts to feel more natural than opening a new tab in your browser and clicking a bunch of buttons.
Reconstituting BI as front end engineering
My read on the technical solution: the next generation of BI looks a lot like the modern frontend ecosystem. Take the same components we identified before, and here's how I think they map:
Visualization → React charting libraries
Interactive controls → pluggable React components
Semantic definitions → MCP servers provided by infrastructure vendors: dbt’s MCP server, Snowflake Semantic Views, etc.
Database connectivity → ADBC / Arrow Flight
Hosting → Vercel / Cloudflare
But the availability of these discrete components isn’t enough. Yes, you could plug something together like this in an afternoon. But, as a data analyst, I have zero interest in navigating the React ecosystem, choosing between charting libraries, selecting a date picker component, and all of this other random-crap-that-is-not-about-my-business-problem. Yes I want the power of all of that—assuming it’s all OSS, it would let me have a lot of control I don’t currently have inside of my current-generation BI experiences. But I don’t want to get bogged down at the outset. Just let me get to work!
This is where there is a gap. The tech all exists, and the new usage pattern (agent as front-end) is becoming clear. What’s missing is an integrated, usable solution that has been purpose-built for analysts.
This is what I’ve been thinking a lot about. What does this minimum viable next-generation BI tool look like? I think it includes:
a dashboard format spec, with agent skills on how to write it
dashboard files, declared in YAML, likely living alongside your dbt project
a lightweight renderer that reads that YAML and produces an interactive page
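To make the idea concrete, here's a sketch of what one of those dashboard files might look like. This is purely illustrative: no such spec exists yet, and every field name below is invented.

```yaml
# Hypothetical dashboard spec -- all field names are invented for illustration.
title: Weekly Revenue
data:
  source: dbt              # e.g. resolve metrics via a semantic layer
  metric: total_revenue
  dimensions: [order_week, region]
filters:
  - field: order_week
    type: date_range
    default: last_90_days
charts:
  - type: line
    x: order_week
    y: total_revenue
    split_by: region
```

The point isn't the particular schema; it's that a file like this is trivial for an agent to write, diff-friendly in version control, and sits naturally next to a dbt project.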
You write the YAML via your agent. The renderer turns it into an interactive experience. This is close to what Hugo and Jekyll, the "static site generators" of ~15 years ago, pioneered: you write content in a defined format (Markdown files with YAML frontmatter), point a renderer at it, and off you go.
Here’s the format, here’s the renderer, you write the content.
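The renderer half of that bargain can be sketched in a few lines. This is a toy, not a real implementation: it walks an already-parsed dashboard dict (YAML loading elided) and emits placeholder HTML that a charting library could hydrate. All field names are hypothetical.

```python
from html import escape


def render_dashboard(spec: dict) -> str:
    """Turn a parsed dashboard spec into a bare-bones HTML page.

    A real renderer would mount interactive chart components here;
    this sketch just emits one placeholder div per chart.
    """
    parts = [f"<h1>{escape(spec['title'])}</h1>"]
    for chart in spec.get("charts", []):
        # Each chart becomes a div that a frontend charting
        # library could later hydrate into an interactive plot.
        parts.append(
            f'<div class="chart" data-type="{escape(chart["type"])}" '
            f'data-x="{escape(chart["x"])}" data-y="{escape(chart["y"])}"></div>'
        )
    return "<main>\n" + "\n".join(parts) + "\n</main>"


# Example spec -- in practice this would come from yaml.safe_load()
spec = {
    "title": "Weekly Revenue",
    "charts": [{"type": "line", "x": "order_week", "y": "total_revenue"}],
}
print(render_dashboard(spec))
```

The useful property is the separation: the spec is declarative and agent-writable, while the renderer is a shared, boring piece of infrastructure nobody has to rebuild per dashboard.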
The hard part
There’s one piece of BI that doesn’t collapse easily into the frontend ecosystem: identity.
If you were doing data work in the 2010s, you remember the Jupyter notebook problem. Notebooks were brilliant for analysis and a nightmare for sharing. You'd build something genuinely useful, then spend three times as long figuring out how to get it in front of people who didn't have a Python environment on their machines, didn't have their own database credentials, and had no idea what a kernel was. The gap between "I built this" and "other people can use this" was enormous.
BI tools solved that gap. They built permissions models, row-level security, SSO integrations, etc. They gave you a URL you could send to the regional leader and know that she’d see her region’s data, not the whole company’s.
This is non-trivial. It’s not what we typically think of when we think about BI, but it’s a lot of work and it’s genuinely useful. Basically every BI tool has had to build this same functionality.
dbt intentionally punted on this. We said: if you’re doing transformation work, you need credentials in the underlying database, and those credentials define your identity and what you can do. That’s a convenient shortcut at the transformation layer, but it doesn’t work at the presentation layer. Most BI users don’t have an account on the underlying data platform. While there are arguments for why that should change, it’s been this way for multiple decades at this point and I’m not sure we should bank on it shifting.
So: in a world where agents become the front end for development, identity remains a persistent moat. I don't think that necessarily means current BI tools are well-placed to defend it, though. Someone will need to act as the identity provider in this new world but, like many platform shifts, the board has been overturned and we'll have to see where the pieces land.
==
Software engineering started eating data a decade+ ago; it just took a little while to get to the presentation layer. But, IMO, this will be a change for the better. There are SO many workflow improvements that front end engineers take for granted that will be tremendously helpful in the day-to-day workflow of a data analyst. I’ll have more to say on that in a future issue.
- Tristan
