Mapping Pedestrians. Hadoop's Failure. AI's Carbon Footprint. Efficient Neural Networks at Google. [DSR #190]
❤️ Want to support this project? Forward this email to three friends!
🚀 Forwarded this from a friend? Sign up to the Data Science Roundup here.
This week's best data science articles
Accelerating Uber's Self-Driving Vehicle Development with Data
A key challenge faced by self-driving vehicles comes during interactions with pedestrians. (…) Through data, we can learn the movement of cars and pedestrians in a city, and train our self-driving vehicles how to drive. We map pedestrian movement in cities with LiDAR-equipped cars (…)
This is super-cool. It’s been clear since the very earliest days of self-driving that cars would be massive IoT endpoints, but it’s fascinating to watch that play out. Uber’s self-driving division isn’t just analyzing roads via its LiDAR-equipped cars; it’s analyzing the people walking on those roads.
To anyone even slightly familiar with deep learning, it’s easy to see how doable this is, but it’s also fascinating to take a step back and consider that mobile, internet-connected, autonomous endpoints are now mapping us. Cue Keanu: whoa.
The article presents a fascinating (if high-level) overview of what it looks like to use this type of data in practice. If this isn’t what your day job looks like today, it might be in the future.
Why Hadoop Failed and Where We Go from Here
With Cloudera losing 42% market cap and MapR signaling they are about to close shop, this week left little doubt that Hadoop is on the way out. So, what happened? And where do we as an industry go from here?
Wow, shots fired! The author attributes the recent downturn at the big Hadoop vendors to their failure to deliver on the value they were selling enterprises: a cheaper data warehouse. That was never really Hadoop’s strength in the first place.
I can’t remember the last time a company I thought was doing great things in data mentioned Hadoop as part of its stack. Enterprise software takes a long time to die, though.
Training a single AI model can emit as much carbon as five cars in their lifetimes
File this under “not at all surprising”. Note, though, that the model that emits as much carbon as five cars has 213M parameters and includes a neural architecture search, which iterates through many candidate models to find the optimal one. That makes it a particularly expensive model to train (and generates a particularly clickbait-y headline!). The other models for which data is presented have a far smaller impact.
The carbon footprint of compute is an important topic, but I’m not sure there’s much we can do about it as individuals. There is a well-known set of policy options (carbon pricing, renewable incentives, etc.) that we in the US seem largely unwilling to use but that are increasingly being adopted throughout the world. There is already a large financial incentive to reduce the compute spent training a model, and that will improve as fast as the research allows; the dial we have more control over is how carbon-intensive that compute is.
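For intuition, the arithmetic behind estimates like the paper’s is simple: power draw × training time × grid carbon intensity. Here’s a back-of-envelope sketch; every input below is an illustrative assumption, not a figure from the paper:

```python
# Back-of-envelope CO2 estimate for a training run.
# All inputs are illustrative assumptions, not figures from the paper.

def training_co2_kg(num_gpus: int, gpu_watts: float, hours: float,
                    pue: float = 1.6, kg_co2_per_kwh: float = 0.45) -> float:
    """kWh consumed by the GPUs (plus datacenter overhead) times grid intensity.

    pue: power usage effectiveness, the datacenter overhead multiplier.
    kg_co2_per_kwh: grid carbon intensity; varies roughly 10x by region.
    """
    kwh = num_gpus * gpu_watts / 1000 * hours * pue
    return kwh * kg_co2_per_kwh

# A hypothetical 8-GPU run at 300W per GPU for two weeks:
print(f"{training_co2_kg(8, 300, 24 * 14):,.0f} kg CO2")  # ~580 kg
```

The same arithmetic explains the headline number: a neural architecture search multiplies the hours term by however many candidate models it evaluates, which is how a single “training run” balloons into car-lifetime territory. It also shows why the grid powering the datacenter is the highest-leverage variable.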
www.technologyreview.com • Share
Google: EfficientNet: Improving Accuracy and Efficiency through AutoML and Model Scaling
Speaking of model efficiency, Google just released research on EfficientNet, a CNN architecture that scales far better than existing architectures. The chart in the post summarizes the results against other state-of-the-art networks: impressive, to say the least.
We’ve been in the big bang of ML/AI over the past 5-7 years, and the industry is only recently starting to care about mundane topics like efficiency. As the current wave of technology matures, my guess is that there will be significant gains to be made in this area. Right now we’re driving gas-guzzling, two-speed-transmission hot rods from the 1950s!
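The core idea in the paper is compound scaling: instead of growing depth, width, or input resolution independently, grow all three together in a fixed ratio. A minimal sketch of that rule (the α/β/γ constants are the ones reported in the paper; the baseline dimensions are placeholder assumptions):

```python
# Compound scaling from the EfficientNet paper: depth *= alpha**phi,
# width *= beta**phi, resolution *= gamma**phi, with
# alpha * beta**2 * gamma**2 ~= 2, so each +1 in phi roughly doubles FLOPs.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15  # grid-searched constants from the paper

def compound_scale(base_depth: int, base_width: int, base_res: int, phi: int):
    """Scale a baseline network's depth, width, and input resolution together."""
    return (round(base_depth * ALPHA ** phi),   # layers
            round(base_width * BETA ** phi),    # channels
            round(base_res * GAMMA ** phi))     # input pixels

# Placeholder baseline: 18 layers, 64 channels, 224px input.
for phi in range(4):
    print(phi, compound_scale(18, 64, 224, phi))
```

The design choice is that one knob (φ) now trades accuracy against compute, rather than three knobs tuned by hand.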
The GPT-2 Replication Saga
Wow, this was quite a saga, and it really captured a lot of attention in the AI community. Here’s what happened:
Back in February, OpenAI announced that they had trained a groundbreaking language model called GPT-2. Uncharacteristically for them, they decided not to release the model out of concern for what malevolent actors would be able to do with it.
On June 6th, Connor Leahy wrote this post announcing that he had replicated the OpenAI result and would be releasing the full model.
On June 13th, Connor wrote this follow-up stating that he had spoken to a bunch of industry folks (including OpenAI) and that he wouldn’t be releasing the model after all.
These specific events (and the linked posts) are not that interesting. What is interesting, however, is the very active conversation surrounding AI safety. There’s so much to say on this topic—far more than I can get into here—but seeing just how much interest these events got over the past week made it clear just how seriously at least parts of the AI community are starting to take this issue.
Weight Agnostic Neural Networks
Not all neural network architectures are created equal, some perform much better than others for certain tasks. But how important are the weight parameters of a neural network compared to its architecture? In this work, we question to what extent neural network architectures alone, without learning any weight parameters, can encode solutions for a given task.
What a silly thing to want to do! Why would anyone want to build neural networks whose connection weights are all identical (and selected at random!)?
It turns out that eliminating a major source of deep learning complexity allows for more effective and efficient neural architecture search (see the sketch below). The authors come to some useful conclusions at the end.
Long, unusual, and interesting.
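The evaluation trick at the heart of the paper is easy to sketch: give every connection in a candidate topology the same single shared weight, score the network across a sweep of that weight, and rank topologies by average performance. A toy version of that scoring loop (the random six-node topologies and the sin(x) task are illustrative assumptions; the paper evolves topologies with NEAT-style operators rather than sampling them at random):

```python
import numpy as np

rng = np.random.default_rng(0)
# Shared-weight sweep; the paper evaluates a small fixed series like this.
WEIGHT_SWEEP = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]

def forward(mask: np.ndarray, x: np.ndarray, w: float) -> np.ndarray:
    """Feedforward pass where every active connection has the SAME weight w.

    mask[i, j] == 1 means node i feeds node j. Nodes are topologically
    ordered: node 0 is the input, the last node is the output.
    """
    n = mask.shape[0]
    acts = np.zeros((len(x), n))
    acts[:, 0] = x                       # input node
    for j in range(1, n):
        acts[:, j] = np.tanh(acts[:, :j] @ (w * mask[:j, j]))
    return acts[:, -1]

def score(mask: np.ndarray, x: np.ndarray, y: np.ndarray) -> float:
    """WANN-style fitness: mean error over the shared-weight sweep."""
    return float(np.mean([np.mean((forward(mask, x, w) - y) ** 2)
                          for w in WEIGHT_SWEEP]))

# Toy task and a handful of random candidate topologies (illustrative only):
x = np.linspace(-2, 2, 64)
y = np.sin(x)
candidates = [np.triu(rng.integers(0, 2, (6, 6)), k=1) for _ in range(20)]
best = min(candidates, key=lambda m: score(m, x, y))
print("best mean MSE over sweep:", score(best, x, y))
```

The paper’s finding is that topologies selected this way can perform surprisingly well before any training at all, which is what makes the approach a cheap inner loop for architecture search.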
weightagnostic.github.io • Share
Thanks to our sponsors!
Fishtown Analytics: Analytics Consulting for Startups
At Fishtown Analytics, we work with venture-funded startups to build analytics teams. Whether you’re looking to get analytics off the ground after your Series A or need support scaling, let’s chat.
www.fishtownanalytics.com • Share
Stitch: Simple, Powerful ETL Built for Developers
Developers shouldn’t have to write ETL scripts. Consolidate your data in minutes. No API maintenance, scripting, cron jobs, or JSON wrangling required.
The internet's most useful data science articles. Curated with ❤️ by Tristan Handy.
If you don't want these updates anymore, please unsubscribe here.
If you were forwarded this newsletter and you like it, you can subscribe here.
Powered by Revue
915 Spring Garden St., Suite 500, Philadelphia, PA 19123