[On-Demand Webinar] Big Brain Burnout: What’s Wrong with AI Computing?
If our brains processed information the same way today’s machine learning products consume computing power, you could fry an egg on your head.
If you picture the brain as a circuit board that “lights up” when we process a thought, you’d see that only the neurons relevant to that specific thought activate, not the entire brain.
In machine learning computing, the entire “brain” lights up on every task, which is incredibly inefficient, not to mention terrible for the environment. There’s got to be a better way.
Instead of forcing a petabyte’s worth of computation through a cell phone’s worth of memory (which is what today’s machine learning algorithms effectively do), we need to flip the script and process a petabyte’s worth of memory with a cell phone’s worth of compute power.
Hear Neural Magic co-founder and award-winning MIT professor Nir Shavit explain why:
- Memory and locality of reference are more critical to machine learning performance than compute power
- We need to fundamentally rethink how we build products that rely on machine learning and AI: it’s about memory, not raw compute power
- By looking to the human brain for inspiration, we can redesign AI systems to be more efficient, both in performance and in environmental impact
Speaker bio: Nir Shavit is the CEO of Neural Magic, a machine learning startup currently in stealth mode, and a professor in the Department of Electrical Engineering and Computer Science at MIT. Shavit is a co-author of the book The Art of Multiprocessor Programming, a recipient of the 2004 Gödel Prize in theoretical computer science and the 2012 Dijkstra Prize in Distributed Computing, and an ACM Fellow. His recent interests include systems issues in machine learning and techniques for understanding how neural tissue computes by extracting connectivity maps of neural tissue, a field called connectomics.
Date recorded: September 1, 2020
Time: 1:00pm ET
Presenter: Nir Shavit, CEO Neural Magic