It is people. It is people. It is people.
What we learned from Coalesce 2021: why most data problems are about people, how to meet people where they are, and learning from open source to empower decentralized data advocacy.
This week, we did a thing. Coalesce 2021 finally came together, live and online with over 5,000 humans tuning in.
You can be forgiven for wondering if Coalesce is, in fact, a data tech conference or something else entirely. Depending on when you tuned in to Coalesce this week, you may very well have started your experience with a purple shaman, an elf 🧝‍♀️, a talking cookie, or a visual of Chris Jenkins chanting in Māori:
He Tāngata. He Tāngata. He Tāngata.
I won't attempt to summarize the 41 hours of content in this Roundup. Instead, I'm more interested in zooming out and taking it all in. What did we accomplish here this week, as a community? For whom? And in what ways does it matter?
-Anna
This is just the beginning
Chris Jenkins's session on Tāngata, an open source library powering an editable data catalog, is emblematic of the larger themes of the conference this week: open data tools and using them to build better human interfaces.
Somewhere around 12 minutes and 20 seconds into the replay, Chris tells the story of a Māori proverb that, roughly translated, reads:
What is the most important thing in the world? It is people, it is people, it is people.
He Tāngata. He Tāngata. He Tāngata.
This is a theme woven into all of the content presented and the conversations that were started this week (and are still going!).
It is people.
Many of the talks this week reflected on what can sometimes feel like absolute chaos in the ways both our tools and our roles as data practitioners are evolving.
On Monday, Tristan Handy and Martin Casado talked about "How big is this wave?" in reference to the massive explosion of tooling in the modern data stack. My biggest takeaway from their conversation was that we're not yet done with changes in our tools, our jobs, and our industry. As the market continues to evolve, Martin says, it will create vacuums, and new tools will continue to emerge to fill those gaps. And in fact, the rate of change might still be accelerating. As Alan Cruickshank put it in a later session, this is just the beginning.
"Dealing with data is dealing with the complexities of the universe," Martin says. "The limit to which you can apply data is the natural world, and there is no limit to the applications that you can build on top of data."
I didn't fully internalize the magnitude of what Martin is saying here until I watched a replay of the conversation. Applications built on vast amounts of data will be as ubiquitous tomorrow as software is all around us today. We can no longer imagine a world without programmable interfaces, and we are just beginning to connect them together. Imagine the possibilities when our data application stack becomes as robust as today's software stack, and the sheer volume of data we have collected starts to be combined in new and interesting ways.
The more our data stack follows the evolution of software (that is, away from massive all-inclusive IDEs and towards highly modularized open source libraries and frameworks), the more specialized data applications will flourish. The more data is collected and combined, the richer those applications will be at solving human problems. There are murmurings of this today: Google Maps not only navigates traffic for you, it now also offers eco-friendly driving routes that use less gas. But only Google has access to the data that powers this feature today. Imagine if citizen scientists did, and also had the tools to easily build on top of this data. What interesting applications would they build to help tackle climate change?
Are we a few years or a few decades away from this world? We have no way of knowing. But we can predict that the chaos we're feeling is most likely ahead of us for years to come. How do we navigate this growing complexity? How do we adapt as we redraw the map of what's possible with data by orders of magnitude every year?
By meeting people where they are.
It is people.
Meeting people where they are
On Tuesday, we argued over whether "Data Scientist" is still a helpful title, or whether it is holding us, as an industry, back from where we want to go next. Reading Emily Thompson's rebuttal to Emilie Schario's talk made me wonder: what if it's both?
Emilie Schario makes a strong case that the "Data Scientist" job title is often used as a kind of escape valve when organizations don't really know what they need from their data teams. Instead, we as practitioners should be encouraging our current and future bosses, HRBPs, and employees to think harder about what we need data humans to do, and to create appropriate career paths for them to do it. If you are in a position privileged enough to shape these conversations, you absolutely must do so.
At the same time, not everyone is there yet. As Emily Thompson points out, many organizations genuinely do not yet know what they need from their data teams. Their bias is to hire generalists who can do a little bit of everything and help them figure it out. Depending on who is doing the hiring, they might hire a Data Scientist first, or a Data Engineer (although Stef Olafsdottir makes a convincing case for why you shouldn't start there).
And while all this is continuing to evolve, Data Scientists and any data practitioner with "Engineer" in their title still earn more money than Analysts, despite very often doing similar work. Our arguments over names are really arguments over the value of the work humans do, and the recognition of that work through meaningful compensation and career paths.
It is people.
To properly address this problem, we need to meet people where they are today. We, data leaders, need to help navigate the chaos that lies ahead by focusing on the people: what they need from us in that moment, and what our organizations need in that moment.
Everyone is building this ship together as we're sailing it. Some of us are further along than others, but we're all in the same boat.
How do we do this?
Open source and interfaces
In "How big is this wave?", Tristan and Martin talk about enabling decentralized and distributed data tooling (read: more different kinds of data tools) that all work independently through common interfaces. We've learned from examples in software engineering that the only way to do this is through open access to source code and collaborating on open standards.
We're trending in the right direction here, tooling-wise. As Katie Hindson showed us this week, you can already build an entire data stack on top of open source tools today. dbt is at the heart of this stack, or as Martin put it, "the glue that stitches together this massive expanse of the data industry". It is imperative that the right elements of dbt remain open to the data Community to help all of us get to this future.
Tristan has written this week about how we plan to keep new features of dbt as open as possible while trying to build a sustainable business. TL;DR: the much-anticipated metrics definitions and the ability to compile them will land in the open source dbt Core project, because this layer must become an open standard in our industry. And the remaining components that you'd need to run your own service using dbt Server will be source-available, keeping this technology open for the Community to evolve together.
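To make that concrete, here is a rough sketch of what a metric definition might look like in a dbt project's YAML, based on the draft spec discussed at the conference. The exact field names may still evolve, and the model and columns here (dim_customers, customer_id, first_order_date) are invented for illustration:

```yaml
# models/marts/schema.yml -- hypothetical file; field names follow the draft spec
version: 2

metrics:
  - name: new_customers
    label: New Customers
    model: ref('dim_customers')      # the dbt model the metric is defined on
    description: "Customers acquired in a given time period"

    type: count                      # the aggregation to apply
    sql: customer_id                 # the column or expression to aggregate
    timestamp: first_order_date     # the column used to slice the metric over time
    time_grains: [day, week, month]

    dimensions:                      # columns the metric can be grouped by
      - plan
      - country
```

Because a definition like this lives in dbt Core rather than in any single BI tool, every downstream application can compile the same definition and report the same number, which is exactly what makes this layer a candidate for an open standard.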
We're well on our way to breaking down barriers in our tooling and developing common data interfaces and standards.
But what we learned this week at Coalesce is that we're less good at breaking down barriers and developing interfaces between the people in our organizations. Something else Emily Thompson wrote stood out to me:
The real problem I see is that we as an industry haven't figured out how to optimally integrate science-practitioners with product developers, at least not consistently.
I agree. Also, how to integrate data practitioners and marketing wizards, data practitioners and sales champions, data practitioners and customer support heroes, data practitioners and [insert company function here].
There are really only a few pathways to solving this problem of human integration:
Bring more and new kinds of data humans into the various functions you need. For an example of hiring new roles, see "New Data Role on the Block: Revenue Analytics" from Celina Wong. For some advice on training for new skills, see "Git for the Rest of Us" from Claire Carroll. For advice on how to properly set up a hiring pipeline to build a diverse team, see "Beyond the Black Box" by Abia Obas.
Build better tooling that acts as an interface for humans to collaborate with each other, or create more options for folks to use data where they work. Make it easier for folks to focus on the thing they need to do together, and abstract the rest. For some examples, see:
"So you think you can DAG? Supporting data scientists with dbt packages" from Emma Peterson.
"The future of Analytics is Polyglot" from Caitlin Colgrove.
"The Endpoints are the Beginning: Using the dbt Cloud API to build a culture of data awareness" from Kevin Hu.
Change what you incentivize in folks' roles to encourage the right interfaces to emerge. Hire for data literacy at the decision-making layers, and incentivize everyone in the organization to be accountable to data goals. For an example, see "Scaling Knowledge > Scaling Bodies: Why dbt Labs is making the bet on a data literate organization".
Which one is the right path? They all are. Managers and Directors of functional areas have the power to influence #1. ICs on data teams have the power to influence #2. Leaders at every layer of your organization have the power to influence #3.
Decentralized, distributed data advocacy (that is, doing #1, #2, and #3) is how we will make slow and steady progress amidst the chaos that this new wave of tooling will introduce. But we don't have to have all of our ducks in a row 🦆🦆🦆: change starts in any one of these areas, regardless of where you sit in your organization. And there are something like 77 sessions with ideas for you on how to enact change where you are.
Where will you start?
Elsewhere on the internet…
Benn Stancil's take on the Analytics community, post-Coalesce 2021:
The analytics community's success is proof that, even if software eats the world, software culture doesn't have to. It's proof that the good things about the tech industry – the ambition, the impact, the financial opportunity – aren't the causes of the bad. It's proof we can throw out most of the bathwater, and keep the baby.
Given how much the data industry tries to learn from software engineering, maybe it's time to go the other direction. Maybe it's time for community leaders to push outward, into the tech industry and into its newer adjacencies, and to start teaching rather than learning. There are a lot of dark spots in tech, and what a light, my lord, is needed to conquer so mighty a darkness. But the best in this community have that brilliance. It's time others get to see it.
If this doesn't capture the vibe of the week, I don't know what does. Thank you for this write-up, Benn!
And although a lot of our corner of the data sphere was immersed in Coalesce this week, other things happened as well:
Databricks launched their own VC fund
Congrats to folks over at Databricks! Excited to see what brave new spaces will be created by the companies they fund!
The Concept Layer
by Ian Fahey
This is an interesting take on building a shared internal knowledge graph of a business. Despite the similarity in name, this is not actually a take on the metrics layer as a data product. It's one data team's strategy for the delicate balancing act between standardizing data definitions and keeping clean, canonical data easy to remix and explore. Their solution: define a canonical set of tables that represent the most important business entities, and pair them with event tables and standardized dimensions to create structured, easy-to-explore data that's consistent across the business. ✨ Very neat stuff!
Also this week, Ashley Sherwood started a Substack. The latest post, on the hard work of having an opinion, is a must-read.
We've talked a lot in the past weeks about bringing data professionals into the room where decisions are made. But what do we do once we get there? How do we make our case based on data? How much opinion should we have? Ashley breaks it all down for us. Some of my favorite bits:
Use data where it's useful, but look for business, process, human, and ethical constraints as well.
Data is not a panacea, and just because data says something doesn't mean it's always the right thing to do. Hear, hear!
I also appreciated the distinction between making an evidence-based claim and an evidence dump. The latter is when your data professional rattles off a bunch of facts and leaves you on your own to divine a recommendation (something all of us have been guilty of at some point).
Good opinions are mutable – they're open to change in the face of new, different information.
This is important. You don't have to know everything to make an evidence-based claim. You need to know just enough, and state your assumptions and context, which may very well change over time. And you don't have to qualify your claim with every assumption you make; just be prepared to pull up more data to support your claim.
On how much is enough, this is probably my favorite part of her piece:
If you tend to err on the side of hasty judgment, take a moment to intentionally look for evidence that contradicts your opinion. […] If you tend to err on the side of endless hours of research before making an opinion – stop! Make an opinion right now. Even if you don't have any information. Stretch your brain, do your best, try to logic it out. With every new piece of information, pause and reformulate an opinion. This is good for your critical reasoning muscle, and means you'll never be caught without at least some opinion.
On opinions vs preferences:
You may find that the quality of your opinions increases when you no longer feel like you need to justify your personal preferences because it is entirely reasonable to hold an opinion that contradicts your preferences.
For example: "I prefer reading articles, but with the evidence that more people learn better from video presentations, I think we should invest department budget in recording more video training for our content."
Being able to separate your own priors from data about what others need is so important, and it's what makes for a truly great data professional. You're not being inconsistent; you're being self-aware, empathetic, and acknowledging your biases.