The computers talk to us now
It’s very easy to get lost in the details. How does new agent framework X perform on coding benchmark Y? Should you use the biggest frontier model for your tasks and have a smaller model call in the big model as an advisor? What’s the deal with the goblins?
We live in a world of ever-increasing noise, with more and more threads pulling at our attention. I know this because not only am I uniquely susceptible to having my attention pulled, but I also spend a lot of my time thinking about how to get your attention, dear reader. And I do that with the noblest of goals, of course. To inform you on the state of analytics engineering. To tell you about the latest in data agent benchmarking.
But multiply this by 100, by 1,000, by 10,000, and it becomes hard to focus on the big picture.
And the big picture is that the computers talk to us now!! Beyond talking to us, they’re doing real and useful work. Writing code, answering data questions, connecting our systems together. They’re getting better rapidly.
If you go back even five years, basically everyone would have told you that we’re decades away from being able to have a real conversation with a computer. To ask it to write usable code. To answer questions for us.
But we’re here and it’s happening! I know it probably sounds like I have some deeper point that I’m getting to, but actually the most important thing is the obvious thing that’s staring us straight in the face. The computers talk to us now!
It’s easy for me to get lost in the day-to-day - in fact, the title of this roundup is taken from a conversation I had with a friend recently. I asked him how his new job was going. He just looked at me, paused for about five seconds, and said, “Jason, the computers talk to you now.” And then he repeated it about five times. Was it a little annoying? Yes. But it got the point across: most people, even though we are talking about AI ad nauseam, are not fully absorbing the weight of the fact that the computers talk to us now.
Our systems were not built for a world in which computers talk to us
I was reminded of how fast this has happened the other day.
We have been working on updating a piece of dbt internals that has not been seriously revisited in a couple of years. Drew Banin originally wrote some of it back in 2020. The code does what code does. It is well-considered. It has been quietly running, somewhere in someone’s project, every single day since.
The thing that struck me was the design lens it was written through. In 2020, Drew was thinking about how a human analyst would interact with that code. Where the documentation would land. How the affordances would feel. What the error messages should say to a developer staring at a terminal at 4pm on a Tuesday. That is the right design lens for 2020. It is the only design lens that makes any sense in 2020.
And now, in the same span of time it took us to circle back and revisit it, the audience has expanded. We are still designing it for humans, of course. But we are also designing it, very seriously, for robots. For agents reading the docstrings as part of their context window. For coding tools generating dbt projects on the fly. For natural-language interfaces translating English into configuration and configuration back into English. The set of entities, human and otherwise, who interact with that code has grown in a way that was not on anybody’s whiteboard the first time around.
The joke I have been making about this, which is not really a joke, is that it took us long enough to come back to that file that we crossed an entire era while it sat there. Drew wrote it for analysts. We are revisiting it for the computers that can talk to us now.
That is a small story about a small piece of code at one company. It is the same story playing out, all at once, in approximately every serious software project in the world, on a timescale none of us have lived through before.
“But it doesn’t really understand you”
You can argue with the framing in any number of ways, and people do, all the time. I have sat with the objections long enough to want to walk through them, because dispatching them is part of the point.
You could say: well, SQL is a language for talking to computers. We have been talking to computers for decades. True. The conversation has just been a bit one-sided. The computer understood SELECT, and you understood that it understood SELECT, and that was the entire vocabulary. The new thing is not that we have a way to address the machine. It is that the machine has acquired something close to general comprehension of what we mean.
You could say: they are not really talking. They are doing very fancy averaging over a very large pile of matrix multiplications. Also true, and also not the most interesting frame. The flight is real even if you can describe the wing in terms of differential equations. Decomposing what the model does into linear algebra does not change what it does on the other side of the screen, which is to read your data, write the code, take the action, and answer the question.
You could say: the demos are misleading and the systems break in production. I want to take this one more seriously, because it is partially correct, and it is the form of skepticism that has aged best. We have spent a lot of time in the Roundup walking through where the systems still fail, where the assumptions go quietly wrong, where the correctness boundary sits on the other side of someone’s tacit knowledge. All of that is true. None of it changes the underlying fact. The systems are not perfect. They are also not the point. The point is that they are here, that they are useful enough to use, and that the slope they are on is steep.
I can hold all of those caveats in my head and arrive in the same place. You can, today, ask a computer to do real, substantial, previously-skilled work on your data, and it will do a meaningful version of it, and the version it does will be better in a month than it is today. That is the level at which the change is operating. And that is the level at which I think it deserves to be discussed once in a while.
And to feel the awe associated with that.
Living in Future Shock
What I keep landing on, when I let myself sit with the larger zoom out, is that we are living inside something close to a continuous state of future shock, and we have started to treat that as normal.
Future shock was Alvin Toffler’s term for the disorientation that happens when too much change arrives too quickly for the social and psychological structures around it to absorb. He was writing in 1970, worrying about color television and the moon landing, which is funny in retrospect. The underlying observation holds, though. There is a speed at which change can arrive that exceeds the human bandwidth for digesting it. And when you exceed that bandwidth, you do not feel awe and you do not feel terror. You feel a low-grade numbness you have to actively wake yourself up out of.
I think a lot of us are walking around inside that numbness right now. The computers talk to us now is so true and so strange that it has stopped reading as strange. You hear someone show off a working agent doing previously-skilled work and you nod and ask follow-up questions about cost. You watch a finance partner sit down with Claude Code and write a week’s worth of models in an afternoon, and the most surprising thing about the conversation is that you are not surprised. You read a sentence like “the computers talk to you now” and you have to be reminded, by a friend who does not work in data, that it is a sentence worth pausing on.
There is something a little funny about how casually we have all absorbed a fact that, six years ago, would have been the lead story in every newspaper for a week. That casualness is not a moral failing. It is what humans do. We adapt, fast, and that’s great.
But also we remember. And we should remember this is highly unusual.
This is going to change how we work, how organizations work, how the economy works
This sense of contingency runs through every conversation I have these days. The career trajectory of an analytics engineer entering the field in 2026 is not the trajectory of one who entered in 2018. The shape of a data team in five years will not be the shape of a data team today, and I know this because the shape of a data team today is not the shape of one I worked on five years ago. The documentation we wrote for ourselves two years ago is now being used to drive measurable increases in agent efficiency.
None of that is a forecast. It is a description of the floor under our feet. And I think the people who do their best work in the next stretch are going to be the ones who can continue to feel the weirdness and awe that these systems should evoke, while at the same time approaching them rigorously, methodically and empirically. The frameworks will change. The models will change. The agent harnesses will change. The fact that the computers talk to us now will not change. It will only become more true.
Jason

