Ep 41: dbt Labs + Transform join forces on metrics (w/ Nick Handel + Drew Banin)
Tristan, Nick and Drew discussing the future of the dbt Semantic Layer.
Nick Handel, as co-founder at Transform, helped develop the popular open source metrics framework MetricFlow. Drew Banin, a co-founder at dbt Labs, helped build the initial version of the dbt Semantic Layer, which launched last year.
Transform was acquired in February by dbt Labs, and in this conversation with Tristan, they talk through their collective plans for the future of the dbt Semantic Layer.
Key points from Drew and Nick in this episode:
What is a semantic layer? What are the necessary components of one?
Drew Banin:
So for me, it starts with the concept of a metric. You want to understand the business. When we talk about semantics, it's getting very specific about the meaning of something. And frequently, the place where folks are most misaligned within an organization is the meaning of AR, or the meaning of a customer versus a user, an account, and a contact.
And so semantics is creating precision around these things, and then consistency. And the idea of a metrics layer or a semantic layer is, to me, about exposing that to a lot of other tools. So it's not enough to just see the number somewhere. It's about actually getting access to it in the tools that people use, whether it's BI or other kinds of data applications.
Can you paint a picture of the environment where a set of integrated tooling, including a semantic layer, exists? What were the problems that led internal tooling teams to work on this? How far did you get, and what did it feel like to be a practitioner in this environment?
Nick Handel:
I wouldn't say that we knew we were building a semantic layer, but I would say that starting in 2014, we were working on metrics and tooling, and it definitely became a semantic layer eventually.
I joined Airbnb's growth team as a data scientist. I was the second data scientist on that team. And that team had established a practice of that build-test-learn experimentation cycle. That was a really big part of what that team was doing.
And so as a part of that we were trying to run as many experiments as possible, and every time we ran an experiment, we would go and write a bunch of queries. And often, those queries ended up looking really similar. And they were really just trying to calculate metrics. And so basically, a week after I joined somebody on the data platform team said: Hey, I've been working on this little tool. It spins up a few jobs based on some YAML files and it outputs a set of metrics for your experiment.

And I thought that was really cool because I was writing a bunch of SQL. It was repetitive; every single time I'd have to make these small differences to say it's this experiment, it was launched at this time, it's a user-level experiment, or a device-level experiment, or something along those lines.
And I started using this tool and it was just magical for me because it really unlocked a huge amount of productivity. I went from running maybe one or two experiments at any given time, just because it took about a week to do the analysis for each one, to running 15 to 20 experiments at the same time.
It seems like the data analysis component of the experimentation workflow, rather than the engineering part of the experimentation process, was actually the bottleneck. Can you add to that?
Nick Handel:
I think it depends on what you're building. If you're building some very significant piece of infrastructure and you're trying to test that the page load time decreased by a hundred milliseconds because we switched over to this new renderer, and it took us a few months to build it, then sure.
But what a growth team is doing is oftentimes - I think that this is probably underappreciating the work of growth teams - but it's changing button colors and changing copy and relatively small changes that engineers can do really quickly. But one of the things that I learned running all of those experiments is that our product intuitions are very often wrong.
The best people will be able to get the right bet on product maybe 60% of the time. And you learn that by running experiments because you make a hypothesis, you run an experiment and you just see that it doesn't always turn out exactly like you think and there's a lot of interesting stuff in there.
But yeah; I was the bottleneck, my work was the bottleneck and I immediately felt that when I joined the company. And it's part of why I latched onto that tool because I was like this tool is what allows me to no longer be the bottleneck. And it's what allows me to actually do the interesting work, which is what happens when you get all of the metrics and you look at them and you're actually experiencing the kind of analytical workflow.
Is calculating a metric sometimes as simple as a single count query? And what does a really complicated, gnarly query generated by MetricFlow look like?
Nick Handel:
Yeah. So sometimes it absolutely is, and it's really good if the North Star metric of your company is a query that is that simple because it means that anyone can go and ask it and they'll always get it right. And that's really important.
But the reality is that when we do analysis, you can't just select count star from the company's big table and answer your questions, right? That's not enough. It might be enough to say the business is doing well because the number is up and to the right, but it's not enough to know why it's doing well.
It's not enough to know the nuances. And so when you want to do that kind of analysis, typically there are two types of questions that you can ask, right? And Tristan, you and I have talked about this. You can either ask a time series question and when you ask a time series question, you almost always want to either compare two metrics or you want to compare a metric across various dimensions.
And so when you want to do that, sometimes you need to pull a dimension in from another table. And in that case, a part of getting that metric definition correct is actually doing the join correctly between those two tables. And so if you want to calculate two metrics, sometimes it's quite complicated.
You have to go and calculate both of those metrics. Maybe one of them is not simple or maybe both of them are not simple. So you have to do the SQL, you have to do the filtering to construct that metric correctly, and then you need to join them across the time series. And then you need to be able to look at that unified data.
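To make that concrete, here is a rough, hand-written sketch of the kind of SQL a semantic layer ends up generating for that first, time-series case: two metrics, each with its own filtering logic, computed separately and then joined on the time spine. This is illustrative only; the table and column names are hypothetical, not actual MetricFlow output.

```sql
-- Illustrative sketch: compare two metrics (revenue and active users) by day.
-- All table and column names are hypothetical.
with revenue_daily as (
    select
        date_trunc('day', ordered_at) as metric_time,
        sum(order_total) as revenue
    from orders
    where status = 'completed'   -- the filter is part of the metric definition
    group by 1
),

active_users_daily as (
    select
        date_trunc('day', event_at) as metric_time,
        count(distinct user_id) as active_users
    from events
    group by 1
)

select
    coalesce(r.metric_time, a.metric_time) as metric_time,
    r.revenue,
    a.active_users
from revenue_daily r
full outer join active_users_daily a
    on r.metric_time = a.metric_time
order by 1
```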
So the second type of question is: I would like to understand an entity, and I want to dig into that. Like I want to know about a user and I want to know a bunch of different metrics about this user, right? I want to know the number of purchases in the last 90 days, the number of times they've contacted customer service, etc.
Or you could ask a dimension question, like: are they on the team plan or the enterprise plan? Something like that. And oftentimes those types of questions lead to the most gnarly queries because you're pulling metrics from tons of tables. You're pulling dimensions from tons of tables.
Oftentimes dimensions and metrics are not coming from the same tables, and you're pulling it all in and trying to figure out how to do seven joins in a query. And those are the kinds of questions people ask. So you need to be able to generate that SQL.
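A rough sketch of that second, entity-centric kind of question is below: one row per user, with metrics and dimensions fanned out across several joins. Again, every table and column name here is hypothetical and only meant to show why the generated SQL grows so quickly.

```sql
-- Illustrative sketch: one row per user, with metrics and dimensions
-- pulled from several different tables. All names are hypothetical.
select
    u.user_id,
    u.plan_type,                                    -- dimension from the users table
    coalesce(p.purchases_90d, 0)   as purchases_90d,
    coalesce(t.support_tickets, 0) as support_tickets
from users u
left join (
    select user_id, count(*) as purchases_90d
    from orders
    where ordered_at >= dateadd('day', -90, current_date)
    group by 1
) p on p.user_id = u.user_id
left join (
    select user_id, count(*) as support_tickets
    from tickets
    group by 1
) t on t.user_id = u.user_id
```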
What does a big SQL query look like, in terms of the number of lines, in the world of MetricFlow responses?
Drew Banin:
There are some optimizations, so I'm going to go with the optimized lines of SQL. I think hundreds of lines of SQL are perfectly normal and it's not unreasonable.
As analysts, we've all written hundred-plus-line SQL queries to answer relatively simple questions sometimes. It's just the way it is. And you don't want to encapsulate that logic in a model because you're just going to be doing it over and over again in slight variations that are each a little different.
And I know because I did that also; I've put basically the same query across seven BI dashboards and had to figure out how to maintain that.
dbt Labs joined forces with Transform about a month ago. This is the only question that really matters: What is going to happen from here forwards? What's the plan? When are we getting the combined stuff?
Drew Banin:
The great thing is there is a pretty clear integration point between this amazing query planning, metric-enabled MetricFlow Python library and dbt.
There are a couple of things to figure out in terms of how we support all the capabilities of MetricFlow in the metric definitions inside of dbt schema.yml files. I think that we can right some of the wrongs that we've made in the specifications as a part of this process.
So that's one side of things, like actually updating dbt’s knowledge of what a metric is and how metrics relate to entities and other metrics, things like that. Another part of this is integrations. So up till this point, we have only shipped support for Snowflake. We are going to ship support for other data platforms as well.
So that part's happening concurrently. That'll help get a lot more of the semantic layer out into the world for folks on other databases. And the final big piece of this is the actual interface for querying a metric. And this is a place where we've got a couple of different options. But I can tell you with a lot of confidence that nobody's really happy about this metrics.calculate macro in its current incarnation. And I think there's actually some stuff that we can do there to make it a lot easier to query metrics from your BI or SQL tool.

So we're thinking that over the next three to six months or so, we should be in a place where we can make the changes required to dbt: we can support new interfaces for querying those metrics, and we'll get support out the door for other data platforms. And from there, we have just got to keep working on making these great integrations with some of our partners, and other folks that are building really exciting things on top of semantic layers. They'll have more capable APIs to integrate with, hopefully, for the next 36 months.
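For context, this is roughly what querying a metric through the dbt_metrics package's calculate macro looks like in its current incarnation; the metric name, grain, and dimensions below are placeholders, not definitions from this conversation.

```sql
-- Roughly the current dbt_metrics interface; 'revenue', 'month',
-- 'plan', and 'country' are placeholder values for illustration.
select *
from {{ metrics.calculate(
    metric('revenue'),
    grain='month',
    dimensions=['plan', 'country']
) }}
```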
Nick Handel:
Can I pile on there and just say that I think it's really rare that you have two teams working on technology separately, and then you bring them together and everything is just perfectly aligned.
And I wouldn't say that it's like perfectly aligned, but it's pretty remarkable how complementary all of the work that the two teams were doing is. And I think that it's leading to what I think is going to be a pretty smooth kind of path. And I think it takes time. It takes time to do the engineering work, but I am really excited about the product that we're going to be able to release.
And there's a lot of great work that we can do with our partners as we move towards that.
Let's go 24, 36 months out into the future. What does the world look like? How are practitioners working differently? How do companies interact with their data differently now?
Drew Banin:
I think we've seen this kind of transformational effect with dbt to date. So if you cast your mind back to 2016, 2017, you couldn't really just build a BI experience that queried tables and showed you really great charts and dashboards, things like that. There was this baked-in assumption that you would have to do some sort of transformation.
It was inevitable that you'd have to do some sort of transformation in order to have a table that is ready to analyze. So we saw that with Looker PDTs, and Mode Analytics had some sort of create-a-table-on-a-schedule feature.
So to me, I think the big longstanding impact of this work is going to be that people building tools can start with a new set of assumptions, which is that all the very important aspects of an organization (the metrics, the entities, how they relate to each other) are already there.

These things are already defined, and the challenge is not how we create a really good interface for people to define this logic; it is how we create the best possible experience for interacting with it, understanding it, and sharing it. And I think it shifts us up this pyramid of needs, if you will.
And I think that there are going to be new types of tools that assume, from the jump, that you will have a semantic layer. And I think for the people using those tools, it will better connect the actual work that they're doing day-to-day to the experience of interacting with data.
So less of "open up a link to a dashboard" and much more "see that data in context in the place where you're already working."
Nick Handel:
Yeah, I totally agree with that. I think that there are so many companies in the past that have said I need to build a lightweight semantic layer into my product in order to accomplish the product experience that I want to deliver.
And it's catalogs, experimentation tools, BI tools, analytics tools, activation, reverse ETL tools. They all have thought about the concept of metrics. They've thought about dimensions. They've potentially tried to build SQL. It's a hard problem. And having this kind of foundation that allows companies to just from the beginning say, I am going to build the best-in-class version of this product on top of a solid foundation of metrics, I think is going to lead to much better, much more domain-specific data tools in the next few years.
There's also the experience that the analysts have who use this, and I think that's also an interesting thing to talk about. I think part of the role of an analyst is to do a lot of translation work. They interact with the domain expert in the business. They go back, they write some SQL, they show some data, and that person asks a follow-up question.
They're acting as this translation layer between the data warehouse and the person who's trying to make decisions and understand and do analysis. Everyone is capable of doing analysis. Very few people have the ability to freely ask questions and see responses and have that kind of flow to just do analysis and make decisions.
And I think that what this will enable, with some of those new kinds of more dedicated product experiences, is that the analyst will actually be freed up from doing a lot of that translation work. The metrics and dimensions will just find their way into the tools, and instead, analysts will get to focus on the really difficult analysis.
The stuff that really enables the business to discover net new opportunities requires disentangling complicated data. And that's the kind of analysis that I think analysts sign up for and get excited about, and then they join companies and their job turns into being this translation layer.
I'm really excited to see that, because it's a real skill that I think people don't get the opportunity to flex enough.