Agent Skills: Disseminating Expertise
I'll never look at a repo full of markdown files the same way.
A few weeks ago I pointed Claude, equipped with our new migrate-to-fusion skill, at a real, decently-sized dbt Core project running 1.10 and told Claude to do its thing.
It performed the entire migration with zero help from me; Fusion compiled and ran flawlessly.
I sat there for a second after it finished. That skill encodes hundreds, maybe thousands of hours of collective human experience across our team and the community: the edge cases, the config quirks that trip everyone up, the judgment calls about what to deprecate and what to preserve. Things you’d only know if you’d done multiple migrations. All that, now in 12kb of markdown, callable by any agent that supports skills.
And that’s just a drop in the bucket. The rest of the skills we shipped encode best practices that built up across the entire dbt community over the past decade.
I recognize that this is not a well-formed question but … what does that mean? It feels big, important. We’ve built hundreds of hours of training and certification content, written hundreds or thousands of pages of documentation, all for humans. And certainly, we haven’t replicated the expertise of a human analytics engineer…yet. But it’s a lot more than nothing, too.
I’ve been sitting with that question ever since that migration, and I don’t have a complete answer. But I have some thoughts.
What We Built
Agent skills are bundles of prompts and procedural guidance that AI agents — Claude Code, Cursor, Copilot, Codex, etc. — load dynamically when you ask them to do relevant work. They’re not documentation. They’re not MCP tools. They’re something in between: encoded expertise that an agent can load and apply without you having to explain how to do a task every time you open a new session.
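For the unfamiliar, a skill is typically just a directory containing a SKILL.md file: YAML frontmatter that tells the agent when to load the skill, followed by the procedural guidance itself. Here’s a minimal sketch following Anthropic’s Agent Skills format (the body and steps below are illustrative, not the contents of one of our shipped skills; other agent platforms may expect slight variations):

```markdown
---
name: migrate-to-fusion
description: Use when migrating a dbt Core project to the Fusion engine. Covers config changes, deprecations, and common edge cases.
---

# Migrating a dbt Core project to Fusion

1. Compile the existing project and record any warnings before changing anything.
2. (Procedural steps, config quirks, and judgment calls about what to
   deprecate versus preserve go here.)
```

The agent reads only the frontmatter up front; the full body is loaded when the description matches the task at hand, which is what keeps skills cheap until they’re actually needed.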
We’ve shipped 8 dbt-related skills so far. I haven’t used them all, but our team has—from solutions architects to resident architects to the DX team that did most of the work to build them—and the overall feedback is that my experience is typical. They work, often shockingly well.
I think that at least part of that is that skills are optimized for a different reader than anything we have written for before. When you’re writing for agents, you can be significantly more declarative (“do this”) whereas when writing for humans you have to preserve a lot more space for individual opinions and tastes. The former, combined with current models’ performance, just produces really excellent results.
What Doesn’t Exist Yet
Eight skills is a start. It’s great, and I’m pumped. But there is certainly a lot more to do. Here are some things we haven’t even scratched the surface on yet:
- Development workflow: code review, dbt Mesh, exposures, metadata
- Data modeling, deeper technicals: snapshots, Python models, warehouse optimization, open table formats
- Data modeling best practices: auditing for consistency, detecting duplication
These are just the incredibly obvious ones and I’m sure you can think of many more. If any of these is something you’ve spent real time on and developed opinions about, the repo is open for contributions.
Skills + MCP vs. Skills + CLI
If you’ve already set up the dbt MCP server, you’re probably wondering how skills relate. Same? Different? Complementary?
Short answer: MCP and skills are different things; they’re both useful; the relationship between them is pretty interesting. The original narrative was “they’re complementary: MCP helps with tool calling and skills help with expertise.” And that’s not wrong, but it’s insufficient.
The problem with that perspective is that it underemphasizes a real tradeoff. Simon Willison put it more bluntly, titling his October 2025 post on the subject “Claude Skills are awesome, maybe a bigger deal than MCP.” The drawback of MCP he pointed to was token consumption: it injects full tool schemas whether or not they’re relevant, making every interaction less efficient.
The alternative approach, for developer-oriented products, is skills + CLI. In one benchmark across 75 runs, CLI agents completed tasks using 1,365 tokens versus 44,026 for MCP agents, almost entirely because all 43 tool schemas in the GitHub Copilot MCP server were injected into every conversation regardless of whether they were used. The CLI approach won on cost by 10–32x for these tasks and hit 100% task completion versus MCP’s 72%. On top of that, adding an 800-token skill file to the CLI agent reduced tool calls and latency by a third each.
Of course, that’s all in a single fairly constrained study, and there are plenty of reasons why that may or may not apply in other contexts. The point is that the right way to do tool-calling is currently a bit up in the air; it will take some time to figure out best practices more definitively.
The Skills-Package-Manager Problem
The skills distribution layer is, let’s say, nascent. Lots of folks see the opportunity and are building similar products simultaneously, and there just hasn’t been convergence on requirements yet. This is fun to watch—we’re watching the infrastructure for a new category get built in real time.
There are a bunch of “skills package managers” out there but from what I can tell there are three that are in the lead: Vercel / skills.sh, Tessl, and SkillsMP.
My read: this infrastructure is mostly being built outside the model providers themselves (the multi-platform benefits are real), and there doesn’t necessarily need to be convergence. The pre-AI analogy is npm, Homebrew, apt, PyPI: there has never been package manager convergence and I don’t think that needs to change.
The more interesting question to me is whether dbt’s package manager should build in native skills support. There’s an active discussion in the dbt-core repo right now proposing exactly that: essentially, dbt deps would install both packages and skills in one command. I kinda love the idea of dbt packages bundling their own skills: install dbt_utils and get the skills that teach your agent how to use those macros correctly. Zero-friction onboarding, skills as a first-class part of the project dependency graph.
At first glance, that feels neat. But the longer I think about it, the more it feels … pretty effing transformative. Imagine referencing dbt-datavault and not only getting a bunch of macros but also an entire set of best practices that your agent can automatically deploy.
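To make that concrete, here’s roughly what the proposal could look like in a project’s packages.yml. To be clear, the `install_skills` key is hypothetical syntax I’m using to sketch the idea from the dbt-core discussion; it is not a shipped feature:

```yaml
# packages.yml — today this installs a package's macros only
packages:
  - package: dbt-labs/dbt_utils
    version: 1.3.0
    # Hypothetical key: `dbt deps` would also install the agent skills
    # bundled with the package, so your agent picks up the package's
    # best practices the moment the dependency resolves.
    install_skills: true
```

One `dbt deps` run, and the agent working in your repo knows both the macros and how experienced practitioners actually use them.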
I find this direction compelling and I imagine we’ll likely move in this direction, though (standard disclaimer) this isn’t a commitment. We’ll share more as we think it through, and please feel free to weigh in on the above discussion.
What I am confident about: native dbt skills package management and shared registries like Tessl and skills.sh aren’t competing. We list dbt-agent-skills on both and I don’t expect that to change.
Technical Knowledge vs. Best Practices Knowledge
If all of the above was this big download of background info on where we’re at with skills, this is the part that I’m genuinely curious about. What’s the role of “traditional” product documentation moving forward? Training? Certification? We have invested a ton of time / energy / resources into building the expertise of an entire ecosystem of analytics engineers; will companies like us still do that in the future? Should they?
Here’s an interesting indicator: Microsoft recently built a pipeline that automatically converts Azure product documentation into agent skills, continuously updated when the docs change. This is neat and serves a real need. But I think something is missing in this approach.
Documentation typically answers one question: how does this product work? It tells you the syntax, the parameters, the valid inputs. That’s important but it’s not all that skills can do.
What documentation typically doesn’t tell you: how should you use this product? When should you reach for this feature versus that one? What does a well-structured project look like three years in? What are the patterns that seem fine today but create tech debt? What are the traps that experienced practitioners warn each other about in Slack but that never make it into the reference docs?
That second kind of knowledge—let’s call it best practices knowledge—is part of what we’ve tried to encode in dbt-agent-skills. Not just “here is the syntax for a unit test” but “here is how you should think about when to write a unit test, what assertions are worth making, and how to structure tests so they give you signal without slowing your CI down.”
Microsoft may not see best practices as their responsibility. That’s probably fair: they’re in a fundamentally different position than most software vendors. Auto-generating skills from docs may make sense for them, although over time I wonder if it doesn’t start going the other way around. In that world, skills, authored for agents and more empirically testable, get written first.
What This Is Really About
Here’s the thought I keep coming back to: just as the dbt community came together over the past decade to figure out the best practices of analytics engineering—what a good model looks like, how to structure a project, when to use a snapshot, how to write a test that’s actually worth running—I think it will come together over the coming year(s) to distill that knowledge into agent skills.
And IMO this skill-ification represents meaningful progress for us as a community. Best practices encoded in a skill propagate faster than best practices in a blog post. Disseminating knowledge via blog post involves a tremendous amount of friction: every single human reader has to do the work of reading, updating their mental model, and practicing the new skill. Distributing skills to an agent is frictionless.
They’re also forkable. There doesn’t have to be one right answer, and each divergent perspective is one that can potentially “win” in the open marketplace of ideas. It’s open source, but instead of OSS software, it’s OSS expertise.
dbt Labs has always had a value we call “moving up the stack.” The exact text: “We believe that all team members should seek to replace themselves on an ongoing basis by building processes, technology, and documentation that obviate their existing work. We have an abundance mindset: there is always more, and more valuable, work to do. Moving up the stack presents growth opportunities for both the individual and the team.”
Agent skills are one of the most direct expressions of this value I’ve ever seen. They push expertise—syntax, design, experience—down into the agent layer. That frees the human practitioner to operate at the top of their license: asking the questions that matter, interpreting results, making the judgment calls that can’t (yet) be “skill-ed”.
As always, I welcome your thoughts. And if you build dbt-specific skills, please send them my way.
- Tristan

