NeuralFlix

Pruning and Quantizing ML Models With One Shot Without Retraining

Presenter: Konstantin Gulin

Neural Magic's research teams have adapted advanced pruning and quantization methods to work in one shot, without retraining. The result is meaningful model compression: 60% of the weights can be removed entirely and the whole model quantized to INT8, all while recovering 99% of the baseline accuracy. This approach produces a more than 4X speedup and requires only minutes of work. This video summarizes our methods using computer vision and NLP examples so you can apply them to your own work and research.
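To make the two operations concrete, here is a minimal sketch of what one-shot compression of a single weight tensor can look like: magnitude pruning to a target sparsity followed by symmetric per-tensor INT8 quantization. This is an illustrative simplification, not Neural Magic's actual algorithm (the video covers more sophisticated, second-order methods); the function names and the 60% sparsity target are assumptions for the example.

```python
import numpy as np

def one_shot_prune(weights, sparsity=0.6):
    """Zero out the smallest-magnitude weights in a single pass (no retraining).

    Illustrative magnitude pruning; the real methods use more advanced criteria.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # Threshold at the k-th smallest absolute value.
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

def quantize_int8(weights):
    """Symmetric per-tensor INT8 quantization: scale, round, clip."""
    max_abs = np.abs(weights).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))          # stand-in for a layer's weight matrix
pruned = one_shot_prune(w, sparsity=0.6)
q, scale = quantize_int8(pruned)
print(f"sparsity: {np.mean(pruned == 0):.2f}, dtype: {q.dtype}")
```

In a real pipeline the same two steps would run over every layer of the network, and accuracy would be checked against a calibration set rather than assumed.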

More NeuralFlix Videos

Apply Second-Order Pruning Algorithms for SOTA Model Compression
Sparse Training of Neural Networks Using AC/DC
How Well Do Sparse Models Transfer?
How to Achieve the Fastest CPU Inference Performance for Object Detection YOLO Models
Workshop: How to Optimize Deep Learning Models for Production
How to Compress Your BERT NLP Models For Very Efficient Inference
Sparsifying YOLOv5 for 10x Better Performance, 12x Smaller File Size, and Cheaper Deployment
Tissue vs. Silicon: The Future of Deep Learning Hardware
YOLOv5 on CPUs: Sparsifying to Achieve GPU-Level Performance and Tiny Footprint
YOLOv3 on the Edge: DeepSparse Engine vs. PyTorch
State-of-the-Art NLP Compression Research in Action: Understanding Crypto Sentiment
3.5x Faster NLP BERT Using a Sparsity-Aware Inference Engine on AMD Milan-X
Pruning Deep Learning Models for Success in Production
Accelerate NLP Tasks With Sparsity and the DeepSparse Runtime
Accelerate Image Classification Tasks With Sparsity and the DeepSparse Runtime
Accelerate Image Segmentation Tasks With Sparsity and the DeepSparse Runtime
Accelerate Object Detection Tasks With Sparsity and the DeepSparse Runtime
Intro to SparseZoo
Intro to SparseML
Intro to DeepSparse Runtime
Intro to Neural Magic & Software-Delivered AI
Intro to Deep Learning Model Sparsification
