NeuralFlix

Tissue vs. Silicon: The Future of Deep Learning Hardware

Presenter: Nir Shavit

If our brains processed information the way today’s machine learning systems consume computing power, you could fry an egg on your head. Picture the brain as a circuit board that “lights up” when we process a thought: only the neurons involved in that specific thought activate, not the entire brain. In machine learning, by contrast, the entire “brain” lights up for every computation, which is enormously inefficient, not to mention terrible for the environment. There has to be a better way. Instead of pushing a petabyte’s worth of compute through a cell phone’s worth of memory, which is what today’s machine learning algorithms do, we need to flip the script and process a petabyte’s worth of memory with a cell phone’s worth of compute.
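To make the contrast concrete, here is a minimal NumPy sketch (an illustration, not code from the talk): it compares a dense layer, where every weight participates, with a hypothetical sparse pass where only about 2% of the output units are relevant to a given input. The layer sizes and the 2% sparsity level are assumptions chosen purely for illustration.

```python
import numpy as np

# Illustrative sketch: the whole "brain" lighting up (dense) versus
# activating only the few units relevant to one input (sparse).
# Sizes and sparsity level are hypothetical, chosen for illustration.

rng = np.random.default_rng(0)
n_in, n_out = 4096, 4096
W = rng.standard_normal((n_out, n_in))
x = rng.standard_normal(n_in)

# Dense pass: every weight participates in the computation.
dense_ops = W.size                      # multiply-accumulates performed
y_dense = W @ x

# Sparse pass: assume only ~2% of output units matter for this input.
active = rng.choice(n_out, size=n_out // 50, replace=False)
sparse_ops = active.size * n_in
y_sparse = W[active] @ x                # compute only the active rows

print(f"dense MACs:  {dense_ops:,}")
print(f"sparse MACs: {sparse_ops:,}  ({sparse_ops / dense_ops:.1%} of dense)")
```

The point of the sketch is simply the operation count: if hardware and software could exploit that kind of sparsity, most of the arithmetic (and the energy behind it) would never need to happen.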

Hear from Nir Shavit, Neural Magic’s CEO and an award-winning professor, about what his recent research in connectomics can teach us and how those lessons can be applied to improve today’s machine learning hardware.

