Model-Based RL @ Google. JupyterLab. Deep Learning Hardware. A Brewery Tour. [DSR #176]
❤️ Want to support this project? Forward this email to three friends!
🚀 Forwarded this from a friend? Sign up to the Data Science Roundup here.
This week's best data science articles
Google AI: Introducing PlaNet: A Deep Planning Network for Reinforcement Learning
Reinforcement learning is one of the primary areas of research in AI today. Most approaches to date have been model-free: agents learn which actions to take through trial and error across millions of observed image sequences, without any underlying understanding of the dynamics of the system they’re operating in.
Model-based RL, in contrast, attempts to have agents learn how the world behaves in general. Instead of directly mapping observations to actions, this allows an agent to explicitly plan ahead, to more carefully select actions by “imagining” their long-term outcomes. Model-based approaches have achieved substantial successes, including AlphaGo, which imagines taking sequences of moves on a fictitious board with the known rules of the game. However, to leverage planning in unknown environments (such as controlling a robot given only pixels as input), the agent must learn the rules or dynamics from experience. Because such dynamics models in principle allow for higher efficiency and natural multi-task learning, creating models that are accurate enough for successful planning is a long-standing goal of RL.
The article goes into detail on PlaNet, a collaboration between Google AI and DeepMind that models the dynamics of its environment and attempts to plan forward using that model.
There wasn’t a ton of fanfare around this launch, but I think this is a big deal. Also, PlaNet is open source.
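If you want a concrete feel for what “planning with a learned model” means, here’s a minimal sketch of a random-shooting planner. To be clear, this is not PlaNet’s actual algorithm (PlaNet plans in a learned latent space using the cross-entropy method); the dynamics_model and reward_model callables below are hypothetical stand-ins for networks you’d train from experience.

```python
import numpy as np

def plan_action(dynamics_model, reward_model, state, action_dim,
                horizon=12, n_candidates=1000, rng=None):
    """Random-shooting planner: sample candidate action sequences, "imagine"
    each one with the learned dynamics model, and return the first action
    of the highest-reward sequence."""
    rng = rng or np.random.default_rng()
    # Candidate action sequences: (n_candidates, horizon, action_dim)
    candidates = rng.uniform(-1.0, 1.0, size=(n_candidates, horizon, action_dim))
    returns = np.zeros(n_candidates)
    states = np.repeat(state[None, :], n_candidates, axis=0)
    for t in range(horizon):
        states = dynamics_model(states, candidates[:, t])  # predicted next states
        returns += reward_model(states)                     # predicted rewards
    return candidates[np.argmax(returns), 0]  # execute one action, then replan
```

Replanning after every real step (model-predictive control) is what lets the agent keep correcting for errors in its learned model.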
AI Safety Needs Social Scientists
Properly aligning advanced AI systems with human values will require resolving many uncertainties related to the psychology of human rationality, emotion, and biases. These can only be resolved empirically through experimentation — if we want to train AI to do what humans want, we need to study humans.
This is the first publication on Distill in quite some time, and it’s by OpenAI. It’s an interesting topic: to the extent that we care about “AI alignment” (and we should…), we need to know a lot more about ourselves before reliably being able to express our own optimization functions.
As always, OpenAI’s work is quite long-term focused, but I find it worthwhile to pay attention to what they’re thinking about. They’re living in a future that the rest of us will catch up to in the coming years.
Git Your SQL Together (with a Query Library)
Short and sweet. This recommendation is a bit like the “analysis” folder of any dbt project: start by just checking your queries into git, then evolve that repo over time into a real data model for your org.
Jupyter Lab: Evolution of the Jupyter Notebook
I admit it: I’ve never played with Jupyter Lab. This post presents it as the natural successor to Jupyter Notebook, and that’s how the project presents itself. I’m fairly ambivalent about that prospect… Jupyter Notebook was a novel user interface that packaged up several technological achievements; Lab feels like a clunky, RStudio-style IDE layered on top of the core notebook experience. I don’t need a single IDE to do everything data-related.
towardsdatascience.com
Yann LeCun on the future of deep learning hardware
LeCun says the demand for DL-specific hardware will likely only increase. New architectural concepts such as dynamic networks, associative-memory structures, and sparse activations will affect the type of hardware architecture that will be required in the future.
“This might require us to reinvent the way we do arithmetic in circuits,” LeCun says. Computer chips today are typically not optimized for deep learning, which can be effective even when using less precise calculations. “So, people are trying to design new ways of representing numbers that will be more efficient.”
The link is to a video; it’s 6 minutes long. Worthwhile.
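As a toy illustration of the “less precise calculations” point (this is just reduced floating-point precision, not the new number representations LeCun is alluding to), you can compare a matrix multiply at float16 against a float32 reference:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((256, 512)).astype(np.float32)
w = rng.standard_normal((512, 128)).astype(np.float32)

full = x @ w                                                   # float32 reference
half = (x.astype(np.float16) @ w.astype(np.float16)).astype(np.float32)

# How far off is the reduced-precision result?
rel_err = np.abs(full - half).max() / np.abs(full).max()
print(f"max relative error at float16: {rel_err:.4%}")
```

The result barely moves, which is part of why so much inference (and increasingly training) already runs at 16 bits or below.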
Brewery Road Trip, Optimized With Genetic Algorithm
Visit the best American breweries of 2018 while minimizing travel time and distance.
Fun and straightforward. My favorite part is actually watching the animated gif at the end as the genetic algorithm iterates through successive generations.
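The core of the technique fits in a few dozen lines if you want to poke at it yourself. This is a generic sketch rather than the author’s code: dist is a hypothetical symmetric matrix of drive times (or distances) between breweries, and the operators (ordered crossover, swap mutation, elitist selection) are standard textbook choices.

```python
import random

def route_length(route, dist):
    # Total length of a closed tour (the last brewery links back to the first).
    return sum(dist[route[i - 1]][route[i]] for i in range(len(route)))

def crossover(a, b):
    # Ordered crossover: keep a slice of parent a, fill the rest in parent b's order.
    i, j = sorted(random.sample(range(len(a)), 2))
    middle = a[i:j]
    rest = [stop for stop in b if stop not in middle]
    return rest[:i] + middle + rest[i:]

def mutate(route, rate=0.02):
    # Occasionally swap two stops to keep the population diverse.
    route = route[:]
    for i in range(len(route)):
        if random.random() < rate:
            j = random.randrange(len(route))
            route[i], route[j] = route[j], route[i]
    return route

def evolve(dist, pop_size=200, generations=500, elite=20):
    stops = list(range(len(dist)))
    population = [random.sample(stops, len(stops)) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda r: route_length(r, dist))
        parents = population[:elite]  # the shortest tours survive
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - elite)]
        population = parents + children
    return min(population, key=lambda r: route_length(r, dist))
```

Swap in a real travel-time matrix for the post’s list of 2018’s best American breweries and plot the best tour each generation to recreate that animation.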
Thanks to our sponsors!
Fishtown Analytics: Analytics Consulting for Startups
At Fishtown Analytics, we work with venture-funded startups to build analytics teams. Whether you’re looking to get analytics off the ground after your Series A or need support scaling, let’s chat.
www.fishtownanalytics.com
Stitch: Simple, Powerful ETL Built for Developers
Developers shouldn’t have to write ETL scripts. Consolidate your data in minutes. No API maintenance, scripting, cron jobs, or JSON wrangling required.
The internet's most useful data science articles. Curated with ❤️ by Tristan Handy.
If you don't want these updates anymore, please unsubscribe here.
If you were forwarded this newsletter and you like it, you can subscribe here.
Powered by Revue
915 Spring Garden St., Suite 500, Philadelphia, PA 19123