To listen to the podcast yourself, tune into your favorite podcast app: Apple Podcasts, Spotify, Google Podcasts, Overcast, Castbox. Or visit The Data Exchange website.
In a recent episode of the Data Exchange podcast, host Ben Lorica spoke with Nir Shavit, Professor at MIT and Neural Magic's CEO, about the present and future state of deep learning.
Nir’s take on the future of machine learning, where it’s heading and where it should be heading, runs contrary to prevailing wisdom. In this conversation, he unpacks why he believes in this vision and why it matters for the larger world of technology and progress.
The conversation spanned many topics, and today we want to share with you some of what you can learn by tuning in to the podcast—which we think is worth a listen in its entirety.
What Neurobiology Can Teach Us About Deep Learning
Nir discussed with Ben his research into multicore software and connectomics, a branch of neurobiology. He explained how studying the brains of various mammals can teach us to balance compute and memory more efficiently, an insight that led him to found Neural Magic.
As Nir explains in the podcast, if our brains processed information the same way today’s machine learning products consume computing power, you could fry an egg on your head. If you think about the brain like a circuit board that “lights up” when we need to process a thought, you’d see that only the neurons local to that specific thought would activate—not the entire brain.
In today’s machine learning computing, the entire “brain” lights up, which is incredibly inefficient. There has to be a better way. Instead of pushing a petaflop of compute through a cell phone’s worth of memory (which is what today’s machine learning algorithms do), we need to flip the script and process a petabyte’s worth of memory with a cell phone’s worth of compute.
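The brain-inspired intuition above can be made concrete with a small sketch. This is an illustrative example only, not Neural Magic's actual algorithms: it compares the multiply-adds a dense layer performs (the whole "brain" lighting up) against the same layer with 90% of its weights pruned to zero, so only the relevant connections do work.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((1024, 1024))
x = rng.standard_normal(1024)

# Dense: every weight participates in one multiply-add.
dense_ops = weights.size

# Sparse: keep only ~10% of the weights, zero out the rest.
mask = rng.random(weights.shape) < 0.10
sparse_weights = np.where(mask, weights, 0.0)
sparse_ops = int(mask.sum())  # only surviving weights do useful work

dense_out = weights @ x
sparse_out = sparse_weights @ x  # same output shape, ~10x fewer effective ops

print(f"dense multiply-adds:  {dense_ops}")
print(f"sparse multiply-adds: {sparse_ops} (~{dense_ops / sparse_ops:.1f}x fewer)")
```

A plain dense matrix product doesn't automatically skip the zeros, of course; the point of sparse-aware software is to exploit exactly this kind of structure so that the skipped work is never scheduled at all.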
Using the lessons encoded within brain tissue, Nir and the team have been able to make giant strides forward in the efficiency of deep learning.
The Challenges of GPUs and Specialty Hardware for Deep Learning
Nir and Ben also discussed the way today’s machine learning landscape has evolved. Increasingly, there are specialty hardware offerings on the market that are designed specifically to accelerate machine learning.
Nir argues persuasively that it is too early in the lifecycle of machine learning to be building specialty hardware. Moreover, he explains why it is unnecessary: the combination of the right software and commodity CPUs will prove capable of handling many deep learning tasks.
It’s Not Just About Speed (or Price)
It makes sense that conversations about machine learning often get hung up on the issue of speed. However, speed is not the only factor at play. The practically “unlimited” memory available to CPUs means these machines can unlock larger problems and architectures than GPUs and other specialty hardware devices.
Moreover, Neural Magic’s proprietary algorithms can actually solve machine learning challenges on a CPU just as quickly as a GPU could, as demonstrated here. Memory and locality of reference are more critical to machine learning performance than sheer compute power.
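The claim that memory access patterns dominate raw compute can be seen in a generic sketch (this is not Neural Magic code, just a standard locality demonstration): summing a large matrix row by row reads memory sequentially and is cache-friendly, while summing it column by column performs the identical arithmetic with strided, cache-unfriendly reads.

```python
import time
import numpy as np

# A large row-major matrix: consecutive elements of a row are adjacent in memory.
a = np.random.default_rng(0).standard_normal((4096, 4096))

t0 = time.perf_counter()
row_sum = sum(float(a[i, :].sum()) for i in range(a.shape[0]))  # sequential reads
t_rows = time.perf_counter() - t0

t0 = time.perf_counter()
col_sum = sum(float(a[:, j].sum()) for j in range(a.shape[1]))  # strided reads
t_cols = time.perf_counter() - t0

# Same arithmetic, same result, different memory locality.
print(f"row-major traversal:    {t_rows:.3f}s")
print(f"column-major traversal: {t_cols:.3f}s")
```

On typical hardware the column-wise traversal is noticeably slower even though it performs exactly the same number of additions, which is the sense in which locality of reference, not sheer compute power, sets the ceiling on performance.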
Nor is it just about price point, though cost is another important reason to explore putting CPUs to work for deep learning. It’s also about how efficiently problems can be solved.
If you’d like to listen to the podcast yourself, you can find it here or on your favorite podcast app.