On-Demand Discussion: Using Sparsification Recipes with PyTorch
Sparse models[1] are the future of deep learning. They have a smaller footprint and run more efficiently on commodity CPUs[2]. Yet the popular belief is that achieving sparsity is hard. In this on-demand discussion, we used a PyTorch example to show that it doesn't have to be difficult when you use sparsification recipes.
In the video above, Benjamin Fineran, Neural Magic's Sr. ML Engineer, demoed how you can apply ready-to-use sparsification recipes to prune and quantize deep learning models. Benjamin:
- Discussed the benefits of using recipes to sparsify your deep learning models
- Showed how you can easily sparsify PyTorch models within your existing training flows (see the sketch after this list)
- Demoed speedups on CPUs that result from model sparsification
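Neural Magic's sparsification recipes are applied through its SparseML library, so a minimal sketch of plugging a recipe into an existing PyTorch training loop might look like the following. The recipe file, model, and dataset below are placeholders, and the exact API can vary across SparseML versions:

```python
# Minimal sketch: applying a sparsification recipe to an existing PyTorch
# training loop with SparseML. "recipe.yaml", the model, and the dataset
# are placeholders; API details may differ between SparseML versions.
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

from sparseml.pytorch.optim import ScheduledModifierManager

model = models.resnet50()  # placeholder model to sparsify
train_dataset = datasets.FakeData(transform=transforms.ToTensor())  # placeholder data
train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)

criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# Load the recipe and wrap the optimizer so the pruning/quantization
# steps it defines are scheduled automatically during training.
manager = ScheduledModifierManager.from_yaml("recipe.yaml")
optimizer = manager.modify(model, optimizer, steps_per_epoch=len(train_loader))

# Standard training loop, unchanged except for the wrapped optimizer.
for epoch in range(manager.max_epochs):
    for inputs, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(inputs), labels)
        loss.backward()
        optimizer.step()

# Remove the modifiers and leave the sparsified model behind.
manager.finalize(model)
```

Because the pruning and quantization schedule lives in the recipe file rather than in code, the training loop itself stays essentially unchanged.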
Date recorded: May 19, 2021
Speaker: Benjamin Fineran, Sr. ML Engineer, Neural Magic
[1] Sparse models = pruned and quantized models
[2] See our conference-accepted research papers to learn about the impact of sparsity on deep learning performance.