Reading the Clouds. Llama 2 and Licensing.
Love the format of this - high-quality long-form content is coming back (and your consumption habits + newsletter analytics reflect that), and your stuff definitely cuts through the noise.
Regarding your future-of-BI question - that's something I've been thinking about a lot, specifically around ingestion. There are tons of tools out there for English-to-SQL or English-to-graph workflows (https://askedith.ai, https://textql.com/, etc.) - but not many for automating the ingestion and analytics layer, which in my experience is often the thing stopping data from being truly accessible to business users.
You're a good writer. I enjoy this weekly update a lot. In fact, it is the only newsletter I enjoy reading, and I signed up for a lot of them (not as many as you, but several dozen), so keep it up!
On the Future of BI question: I tend to think it's going to be status quo for at least the next year. I haven't seen any LLM backend infrastructure that makes it easy to set up a system that answers BI questions in human-readable form. Maybe I'm missing it, but it is still far harder to set up an automated BI ingestion and analytics layer than to just hire a person to do it. Like you, though, I'm very open to changing my mind on this position; I think radical transformation in the area is possible, but the ease of switching isn't there yet.
> Is this working? This whole ‘get all my news from newsletters’ thing?
Not at all. That's why some of us rely on human aggregators to summarize what's new in newsletters like this 😏
Never mind that the "links around the web" section wasn't included in this edition...
Regarding your last comment, on BI in the world of AI: to get to disruption (which will happen), these systems need more context about the underlying systems themselves - both access to and "knowledge" of the metadata. As an example, you can throw a query into ChatGPT and ask it for ways to improve it, and it will give you some useful but generic advice. Now imagine a model that had access to the query logs, table sizes, cardinality, etc., and knew what they meant.
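To make that concrete, here is a minimal sketch of what feeding warehouse metadata to an LLM tuning assistant might look like. Everything here is hypothetical - the `TableStats` structure, the `build_tuning_prompt` helper, and the example stats are all invented for illustration; a real system would pull these from the warehouse's information schema and query history, then send the assembled prompt to a model.

```python
from dataclasses import dataclass, field

@dataclass
class TableStats:
    """Hypothetical per-table metadata pulled from the warehouse catalog."""
    name: str
    row_count: int
    high_cardinality_cols: list = field(default_factory=list)

def build_tuning_prompt(query, tables, recent_log_lines):
    """Assemble a metadata-aware prompt for an LLM query-tuning assistant.

    Instead of asking the model about the query in isolation, we give it
    the context a human DBA would use: table sizes, cardinality, and
    excerpts from the query log.
    """
    lines = [
        "Suggest improvements for this SQL query.",
        "",
        "Query:",
        query,
        "",
        "Warehouse metadata:",
    ]
    for t in tables:
        cols = ", ".join(t.high_cardinality_cols)
        lines.append(f"- {t.name}: ~{t.row_count:,} rows; high-cardinality columns: {cols}")
    lines.append("")
    lines.append("Recent query-log excerpts:")
    lines.extend(f"- {entry}" for entry in recent_log_lines)
    return "\n".join(lines)

# Example with made-up stats for two tables:
prompt = build_tuning_prompt(
    "SELECT * FROM orders o JOIN users u ON o.user_id = u.id",
    [
        TableStats("orders", 120_000_000, ["user_id"]),
        TableStats("users", 4_000_000, ["id", "email"]),
    ],
    ["orders full-scanned 37 times this week, avg 48s"],
)
print(prompt)
```

The point of the sketch is only the prompt-assembly step: with the metadata inlined, the model can say "filter before the join, orders is 120M rows" instead of generic advice about indexes.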
More broadly, I wonder if we'll actually start generating data in different ways to make it more accessible to these AI systems.