Sparsify is an open-source solution with an easy-to-use interface for pruning and quantizing deep learning models. It makes it easy to tweak model hyperparameters to increase performance and decrease footprint, all while providing fine-grained control over loss recovery.
Sparsify makes model optimization simple. With Sparsify, you can upload a model and:
- Analyze: Visualize possible performance and accuracy gains
- Optimize: Modify existing models with fine-grained controls to achieve the desired results
- Integrate: Export configurations (also called "optimization recipes") to quickly retrain models for deployment with only a few lines of code
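To make the "optimization recipe" idea concrete, here is a minimal, hypothetical sketch of what applying a pruning recipe to a weight tensor could look like. This is not the Sparsify or SparseML API; the `apply_pruning_recipe` helper and the recipe's `target_sparsity` key are illustrative assumptions showing the core magnitude-pruning step a recipe might drive.

```python
import numpy as np

def apply_pruning_recipe(weights, recipe):
    """Zero out the smallest-magnitude weights to hit the recipe's sparsity target.

    Hypothetical helper for illustration only; real Sparsify recipes are
    consumed by the retraining integration, not by hand-rolled code like this.
    """
    sparsity = recipe["target_sparsity"]  # fraction of weights to zero out
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to prune
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

# Prune 75% of an example 2x4 weight matrix
recipe = {"target_sparsity": 0.75}
w = np.array([[0.1, -2.0, 0.05, 1.5],
              [0.3, -0.02, 0.9, -0.4]])
pruned = apply_pruning_recipe(w, recipe)
```

After pruning, only the two largest-magnitude weights (-2.0 and 1.5) remain nonzero; the rest are zeroed, yielding the 75% sparsity the recipe requested.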
View the video to see Sparsify and the Neural Magic Inference Engine, our model compression and sparse execution software, respectively, in action.
Date recorded: December 17, 2020
Presenters: Gaurav Rao, Head of Product & Benjamin Fineran, Machine Learning Engineer