Modern data spaghetti. Brave data decisions. Data work ROI.
How the modern data stack has fueled a soup of data sources, and why that puts analysts on "hard mode". Who needs to pay attention when analysts are asked to be brave. Emoji equations.
Greetings! 👋
This week's issue once again changes up the format of this newsletter because there's so much great content out there. You'll find two meaty topics:
taming modern data spaghetti with an Open Source Data OS;
a riff on getting in the room and bravely making data-informed decisions.
You'll also find some shorter commentary on the ROI of data work and git workflows in reverse ETL.
Enjoy the issue!
-Anna
Psst! Smol plug: Tristan and I are speaking at Beyond.2021: Life after dashboards this week. You'll hear from Tristan in the keynote first thing on Tuesday, and then from me a little later on the "Rise of the Analytics Engineer" alongside some friends. See you there!
🍝 Modern Data Spaghetti and the open source data OS
It's getting harder to be a successful data analyst. On the one hand, your success is measured by the speed with which you deliver insight¹. On the other, in a bid to "break down data silos" and "accelerate data-informed decision making", you now need to contend with changes across an ever-increasing array of data sources and destinations while trying your darndest to do your job.
This week, Petr Janda shares a delightfully real story of what happens when someone DMs you on Slack and says: "Does this chart look right to you?" 🧐 Highly recommend reading the whole piece – it's fantastic.
My favorite part, though, is this diagram:
I love this diagram because it so precisely illustrates why analytics work today is getting harder: it is now ~10% delivering insight and ~90% going deeper into the matrix of data models and data sources to figure out "Can I trust this number?". And thanks to the abundance of the modern data stack, the list of things to monitor and validate is constantly growing and changing.
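To make that 90% concrete: answering "Can I trust this number?" means walking backwards through every model and source that feeds a chart. Here's a minimal sketch of that upstream walk, assuming a hypothetical lineage graph stored as a simple "node → direct upstream dependencies" mapping (the model names are illustrative, not from Petr's post):

```python
# Hypothetical lineage: each node maps to the upstream nodes it depends on.
LINEAGE = {
    "revenue_chart": ["fct_orders"],
    "fct_orders": ["stg_shopify_orders", "stg_stripe_payments"],
    "stg_shopify_orders": ["shopify.orders"],    # raw ingested table
    "stg_stripe_payments": ["stripe.charges"],   # raw ingested table
    "shopify.orders": [],
    "stripe.charges": [],
}

def upstream(node: str) -> set:
    """Everything that has to be correct for `node` to be trustworthy."""
    seen = set()
    stack = list(LINEAGE.get(node, []))
    while stack:
        current = stack.pop()
        if current not in seen:
            seen.add(current)
            stack.extend(LINEAGE.get(current, []))
    return seen

print(upstream("revenue_chart"))
# -> fct_orders, both staging models, and both raw sources
```

Every node in that set is somewhere a silent change can hide, and the set only grows as the stack does.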
Ian Tomlin, in an article that pairs very nicely with Petr's, calls this phenomenon data spaghetti.
A quick recap of what data spaghetti is and how we got here:
Businesses are using more SaaS tools to get work done across departments like Sales, Marketing, and Support.
A big promise of today's data ecosystem is breaking down the pesky data silos created by these SaaS tools, and the way we do that is through yet more cloud data services for repeatable, predictable data workflows like ingestion, reverse ETL, and so on.
This newfound and tantalizing ability to ingest ✨ all the data ✨ from ✨ all the SaaS tools ✨ a business uses today comes with, rather predictably, a new challenge: growing complexity in our pipelines.
Growing complexity in our pipelines leads to a new set of data problems: the need to resolve identities across these many systems, overly intricate data quality and availability monitoring across a growing number of sources and destinations, and (also rather predictably) adding even more tools to the SaaS stack that now help you manage these new problems too (a back-of-the-envelope sketch of how quickly this compounds follows below).
The end result is a procurement and business systems nightmare for everyone involved. Or as Petr puts it: "What a mess!"
Yes, yes it is.
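A quick back-of-the-envelope on why this spirals (my numbers, not Ian's): point-to-point plumbing grows with sources × destinations, so each new SaaS tool adds a whole row of pipelines to monitor.

```python
def point_to_point_pipelines(sources: int, destinations: int) -> int:
    """Worst case: every source feeds every destination directly."""
    return sources * destinations

# Illustrative numbers only: a modest stack of 8 SaaS sources and
# 4 destinations already means 32 pipelines someone has to vouch for...
print(point_to_point_pipelines(8, 4))   # 32
# ...and adding "just a few more tools" more than doubles it.
print(point_to_point_pipelines(12, 6))  # 72
```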
While many of us viscerally feel the pain of this problem, we're not all (yet) coming to the same conclusions on what this means for the future of our data systems. Right now, we at least agree that we have a few paths in front of us:
Option 1: the data mesh, in which we resolve this complexity by splitting our systems into multiple vertical pillars – each with a much tighter scope – and creating interfaces between them.
✅ Pros: better vertical integration within a pillar means it's easier to figure out why something "looks wrong" and "what changed". Pillars get to choose their own tools without giving their head of IT a massive business systems integration headache.
🤔 Cons: developing shared constructs across these pillars for insights purposes (like resolving identities in a users table) remains very, very hard.
Option 2: the data fabric (sometimes called the application fabric), in which we resolve this complexity through one massive Platform-as-a-Service to rule them all (kind of like Datadog for data).
✅ Pros: one API, one vendor, one bill.
🤔 Cons: only one API, only one vendor, and one very large bill.
Option 3: the open source data OS, in which we resolve this complexity through open standards and shared protocols that enable different tools in the modern data stack to talk to one another (a hypothetical sketch of what such a shared contract could look like follows this list).
✅ Pros: you're not limited by the number of vendors you work with, because the burden of integration and standardization is on them and not on you, the data professional. Open standards and consistent definitions across vendors mean that you can contribute to a growing ecosystem of solutions that stitch various workflows together – and benefit from the ones that others create.
🤔 Cons: it's not immediately obvious how procurement will work. We still need to agree on and build some of the pieces to make this real! 😬
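To make Option 3 slightly less abstract, here is a purely hypothetical sketch of the kind of shared contract it implies: a common event format that any ingestion, transformation, or reverse ETL tool could emit, and any catalog or observability tool could consume. None of the names below come from an existing standard; they're placeholders for the idea.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical shared event format -- not an existing spec, just an
# illustration of the kind of contract an open source data OS needs:
# every vendor emits the same shape, every vendor can read it.
@dataclass
class DatasetEvent:
    producer: str    # the tool emitting the event, e.g. an ingestion service
    dataset: str     # fully qualified dataset name
    event_type: str  # "created" | "updated" | "schema_changed"
    emitted_at: str  # ISO 8601 timestamp

def emit(event: DatasetEvent) -> str:
    """Serialize to a wire format every vendor agrees on."""
    return json.dumps(asdict(event))

print(emit(DatasetEvent(
    producer="warehouse-loader",
    dataset="analytics.stg_stripe_payments",
    event_type="schema_changed",
    emitted_at=datetime.now(timezone.utc).isoformat(),
)))
```

The hard (and exciting) part is the social one: getting enough vendors to agree on the shape of that payload in the first place.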
My money (rather literally) is on Option 3 ;) What about yours?
Elsewhere on the internet…
🚪 On making decisions and getting in the room
Cindy Alvarez cautions us this week about the trap we can fall into if we let our stakeholders drive our roadmaps. Though her focus is on qualitative research, the experiences she describes mirror those of analytics quite well. Take the chart below and replace the word "research" with "data" or "insights" and you'll have hit the heart of an ongoing values misalignment within organizations that want to be data-informed.
Cindy's recommendation to break out of this cycle is this:
The next step is that, as researchers, we need to eavesdrop and interrupt.
We need to skim docs and chat rooms and listen in on meetings until we hear some of those unchallenged assumptions and unsolved customer mysteries, and then volunteer to seek out answers. It's great if stakeholders jump in and collaborate on that research; and even if they don't, we aren't asking for permission.
Our job is not to convince others of the value of research – it's to create value through uncovering information that can change minds.
If this sounds familiar, then you have also done time on an analytics team in a semi-large organization. Analysts spend a lot of time simply trying to get into the room. When data-informed decision making isn't happening, we put it on ourselves to bust down the proverbial doors and communicate in a way that changes minds.
This feels very different from the world Benn describes in his post this week, "Does data make us cowards?". In Benn's post, we see a world in which folks lean on data availability to avoid making a difficult decision – a polar opposite experience. Where, you might ask, is the disconnect?
Both are actually symptoms of a similar problem. In Cindy's example, a hypothetical organization is at a stage of maturity where assumptions are made without prior study. In Benn's, the organization badly wants to be data-informed, but the incentives to do it well aren't fully in place. In both situations, we ask analysts/data scientists/researchers to step up and "be brave":
In those moments, when the path ahead is foggy and people's opinions are divided, it doesn't matter how smart we are if we're at the head of the table. When the room turns to us, they aren't looking for a final insight to nudge one option ahead of the other. They need to know our opinion. They need to see our conviction. They need us to be courageous.
Without that courage, we're just clever puppets, dancing to whatever tune our data sings to us, hoping nobody sees the wires we're submitting to. To be real leaders, we have to prove ourselves brave enough to know when to walk on our own.
I completely agree that it is incredibly important for the folks in the room who have developed an educated opinion based on data or research to step up and share it. I also think it is just as important for leaders to pay attention to the systems of incentives, power, and accountability in their organizations – asking analysts to step up and "be brave", to "eavesdrop and interrupt", is asking them to apply guerrilla warfare tactics against a much larger opposition. It is a smell that can help point to the shape of that opposition in your organization – pay attention to it!
💲💲💲 Data Work ROI
Mikkel Dengsøe has admirable emoji game, and a timely and thoughtful piece on articulating the ROI of a data team's work. We often talk about "customer-facing data products" and "internal data analytics applications" as two entirely separate phenomena. Mikkel connects the dots in this post and reminds us of the different ways data roles on a team collaborate to create value:
The important takeaway for me is one of prioritization, especially for folks who sit closer to the systems side of the data jobs-to-be-done spectrum:
If you're a Systems Person, constantly evaluate how your work impacts downstream consumers, how many consumers you have, and how much time you spend.
The implication is that an Analytics Engineer will get far greater ROI on their time from improving a data model used by five data scientists than from improving the workflow of just one. And a Data Engineer will get even greater ROI from improving the performance of the data platform, because the platform supports every other data team member and this has a multiplicative effect downstream:
Remember that everyone plays a role.
Data Scientists and Data Analysts work faster if they have high quality data and good data models. Analytics [Engineers] and Data Engineers have multitudes of impact if their data models are used by many.
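A back-of-the-envelope way to see this (my arithmetic, not Mikkel's): treat the ROI of systems work as roughly impact per consumer × number of consumers ÷ time spent. The "five data scientists vs. one" comparison falls straight out of it:

```python
def rough_roi(impact_per_consumer: float, consumers: int, hours_spent: float) -> float:
    """Back-of-the-envelope only: downstream value created per hour of work."""
    return impact_per_consumer * consumers / hours_spent

# Same effort, same per-person impact -- only the audience size differs.
shared_model = rough_roi(impact_per_consumer=1.0, consumers=5, hours_spent=10)
single_user  = rough_roi(impact_per_consumer=1.0, consumers=1, hours_spent=10)
print(shared_model / single_user)  # 5.0 -- the widely used model wins 5x
```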
Reverse ETL, now with version control
Finally, Hightouch.io dropped a Git integration this week that works with all your favorite software repositories. It's great to see more and more pieces of the data workflow adopting software engineering best practices! Highlights I'm particularly excited about (with a hypothetical sketch of "workflows as code" after the list):
describing data workflows as code,
consistency across local and browser-based development workflows, and
interoperability with an open source tool (git).
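"Describing data workflows as code" boils down to the sync definition living in the repo next to everything else, where it can be reviewed, diffed, and rolled back. Here is a purely hypothetical sketch of the idea; this is not Hightouch's actual configuration schema, just an illustration of what a sync-as-code artifact could look like.

```python
from dataclasses import dataclass

# Hypothetical "sync as code" definition -- illustrative only.
# The point is that it's a plain-text artifact: it can be reviewed in a
# pull request, diffed, and rolled back like any other code.
@dataclass
class ReverseETLSync:
    source_model: str   # model in the warehouse
    destination: str    # downstream SaaS object
    primary_key: str    # column used to match records
    schedule_cron: str  # how often the sync runs

crm_sync = ReverseETLSync(
    source_model="analytics.dim_customers",
    destination="crm.contacts",
    primary_key="customer_id",
    schedule_cron="0 * * * *",  # hourly
)
```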
👋 Until next time!