NeuralFlix

How Well Do Sparse Models Transfer?

Presenter: Mark Kurtz

In this webinar recording, we summarize our 2022 CVPR-accepted "How Well Do Sparse ImageNet Models Transfer?" paper and our 2022 EMNLP-accepted "The Optimal BERT Surgeon" paper. We show that sparse models can match or even outperform the transfer performance of dense models, even at high sparsities, and, while doing so, can lead to significant inference and even training speedups.
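
The sparse-transfer idea discussed in the webinar comes down to fine-tuning an already-pruned backbone on a downstream task while keeping the pruned weights at zero, so the transferred model stays sparse. Below is a minimal PyTorch sketch of that recipe; the checkpoint path, class count, and hyperparameters are illustrative placeholders, not the exact setup from the papers.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical path to a pruned (sparse) ImageNet checkpoint; substitute your own.
SPARSE_CHECKPOINT = "resnet50_pruned95.pth"
NUM_TARGET_CLASSES = 10  # e.g., a small downstream dataset

# Load the sparse backbone and replace the classifier head for the new task.
model = models.resnet50()
model.load_state_dict(torch.load(SPARSE_CHECKPOINT, map_location="cpu"))
model.fc = nn.Linear(model.fc.in_features, NUM_TARGET_CLASSES)

# Record the sparsity masks so fine-tuning does not reactivate pruned weights.
masks = {
    name: (param != 0).float()
    for name, param in model.named_parameters()
    if param.dim() > 1 and not name.startswith("fc.")
}

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

def training_step(batch, labels):
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(batch), labels)
    loss.backward()
    optimizer.step()
    # Re-apply the masks after each update to preserve the original sparsity pattern.
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in masks:
                param.mul_(masks[name])
    return loss.item()
```

The key design choice is that only the new classifier head and the surviving (nonzero) weights are trained, which is what lets the transferred model keep the inference speedups of the sparse backbone.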

More ML Research in Action Videos

Apply Second-Order Pruning Algorithms for SOTA Model Compression
Sparse Training of Neural Networks Using AC/DC
How Well Do Sparse Models Transfer?
How to Achieve the Fastest CPU Inference Performance for Object Detection YOLO Models
Workshop: How to Optimize Deep Learning Models for Production
How to Compress Your BERT NLP Models For Very Efficient Inference
Sparsifying YOLOv5 for 10x Better Performance, 12x Smaller File Size, and Cheaper Deployment
Tissue vs. Silicon: The Future of Deep Learning Hardware
Pruning Deep Learning Models for Success in Production
