Creating value from GenAI in the enterprise (w/ Nisha Paliwal)
Capital One's head of enterprise data tech on building a strong data culture for what comes next
Welcome to Season 6 of The Analytics Engineering Podcast. Thank you for listening, and please reach out at podcast@dbtlabs.com for questions, comments, and guest suggestions.
Nisha Paliwal is the managing vice president of enterprise data tech at Capital One and the co-author of the book Secrets of AI Value Creation.
Nisha and Tristan discuss everything from how to get children excited about coding, to heterogeneous data environments, to building strong data culture.
Join data practitioners and data leaders in Las Vegas this October at Coalesce—the analytics engineering conference built by data people, for data people. Use the code podcast20 for a 20% discount.
Key takeaways from this episode
I feel like Capital One almost invented the modern credit card industry: the data-driven practice of segmenting customers based on traits and then making them specific credit card offers based on those traits. Capital One pioneered all of this, and it's all driven by data. Is that fair to say?
Nisha Paliwal: There’s a very old paper—you can Google “IBS Information Based Strategy”—on which Capital One was formed. It's a public paper. You can download it and read it. It has a lot of history of Capital One. Our founder, who's now our CEO, Rich Fairbank, started the company 30 years ago on this IBS strategy. Information equals data, right? So data is everything to this company, and has been everything to this company for many, many years.
Everything we do in this company, there’s a prefix and suffix of data. Where the world is now swirling on many things, like AI, it starts with data. It starts with a lot of these practices, which are engraved in the company.
It's fairly common for a digital native company founded from 2008 onwards to think of itself as a data company, but it's a bit more unusual for a company founded in 1994 to think of itself in that way.
Yeah, so think of all the roles we have to perform, whether it's risk, legal, HR, or even our tech roles for that matter, right?
If your data thinking is backing all those roles, then you suddenly have a different culture in the company. A lot of things you hear at Capital One come from empirical evidence. And where does empirical evidence come from? Data, of course. So whether it's our job as technologists or HR or legal, what is that empirical evidence? What is the data telling us? We are guided by data in every strategy conversation, which is very different than many companies.
And then the other point I'll make is about risk management. For all 50,000 people in the company, our jobs are risk management. And how does one do good risk management? A lot of people leave Capital One and come back, and when we see from the outside how other companies do risk management, we're startled.
We deal with people's money, and so we need to have a risk management angle. Where does good risk management come from? Knowing your data.
Some of this is engraved in the culture. That's how we train everybody. That's how everybody's coached. That's our common vocabulary.
How does that impact the type of humans that you bring onto the team? Capital One is a very quantitative and data-driven financial institution. Does this make it hard to hire people from other financial institutions who might have different practices? Is there culture shock?
That answer will depend on the team and level, of course, but I can talk about the ingredients for being successful.
One is learning. Your appetite for learning has to be insatiable. I can't imagine the type of learning I've had in my years at this company.
Second, which is unique in this company, is a very intense strategy process. And that strategy process is happening every year. It's not two to three years or five years. We get to hear from our CEO every year on the strategy. Everybody coming in gets a lot of training, coaching, onboarding to understand how this company operates.
Our onboarding is unique because we put a lot of rigor around it. I have 30 or 35 interns; some are with us for only 10 weeks. We make sure we surround them with the ecosystem they need to thrive in a place like this.
Data doesn't know which role you are in. Data is everything to a company. And when something that important is everything to the company, we are talking of culture. You have to make sure anybody walking in the door, regardless of job, sees the importance of what we are doing.
Data is how you make decisions, how you close the books, how you do anything. Don't we all want to trust the data in our hands? Data doesn't lie. But for data not to lie, it has to be correct, so that I can trust it. And this is where collaboration comes in, because no single role can ensure it is right on its own. But collectively, we can make sure it is right all the time.
You recently co-wrote a book called Secrets of AI Value Creation. Can you tell us a little bit about that book and what you cover in it?
The beautiful part of the book is that it has a framework for how we can think about AI. A common question I hear from CEOs when I go to conferences is, “Okay, where do I start?” The book does a great job of giving that framework. I’d say it's more for business CEOs who want to start somewhere.
It's not a heavy tech read. It doesn't go into a lot of tech, but it goes into this framework, which gives them all the components they need to think about.
Let's imagine that you're at some conference and the CEO of a large organization approaches you and asks, “I haven't yet tried to figure out my AI strategy. I'm worried that this is all a bunch of hype. Do you think I should roll up my sleeves and dig in, or do I need to let this thing cook for another couple of years before it's really worth my organization's time?” What do you say?
I’d say start with the four-step framework in the book: vision, strategy, architecture, and execution. I think on tech topics, we often start execution too quickly.
The first question I’d ask is, where are you taking your company? And does any tech matter for the direction that you are taking? And then get into these steps of strategy and architecture.
Strategy guides you on where to start. Do I start small? Do I let it bake?
The biggest mistake I have often seen with waves of technology is that we go to execution first. And then, because we haven't set up that vision and strategy, we struggle to find the ROI. The book explains this framework with tons of great examples from great companies on how they have approached all of this.
I feel like one of the things that's so hard for leaders to reason about with AI is that it's hard to know what will work. It feels hard to engage with AI today, because you can't really ask somebody, unless they're one of the very few leaders in the field, “Hey, I want to do X. Is that even possible?” How do you escape this circular logic path?
That's the R&D research part you're talking about or the innovation part. There are endless resources; I don't think this topic is new anymore. You should see the AI hackathon that my daughter does in high school.
The question is, where do you want to invest in that technology? You need to invest very carefully because these things aren't cheap.
These are places where we put a lot of time and energy without an ROI. That's the value creation part of it. So how do you start small enough? And of course, this is a data podcast. You can’t do this without data.
The book starts out with a bit of a cautionary statistic. I think it was something like 85 percent of organizations are failing to live up to their AI goals. That clearly creates urgency around having a framework, having best practices, et cetera. What does a failed AI project look like?
What we were hearing when we wrote the book was often a lack of clarity on the areas where companies want to use AI.
We talked to a lot of CFOs who were struggling with whether it was an investment they wanted to pursue. We saw data quality issues. Do I trust the data? I don't think the problems are unique to any one industry.
Are there dead-obvious use cases for AI yet that everybody has decided are clear wins?
Nothing is ever a hundred percent true. But from what I've read, I'd say everybody is doing some sort of customer interaction.
I think for AI to go mainstream, we still have to get very comfortable with the usage and the results of it. We're still putting a lot of humans in the loop to validate, because we need that validation.
This newsletter is sponsored by dbt Labs. Discover why more than 30,000 companies use dbt to accelerate their data development.