Neural Magic is excited to announce and share highlights of the 1.2 release of our DeepSparse and SparseML libraries. The full technical release notes are always available in the release index of each Neural Magic GitHub repository. If you have any questions, need assistance, or simply want to say hello to our vibrant ML performance community, join us in the Deep Sparse Community Slack.
What's New in the Neural Magic 1.2 Product Release
We've consolidated the Docs content for our various products into a more centralized experience. From getting-started material to tutorials to product features, our new Docs website has you covered for getting value from Neural Magic! In this release, we've also created new pipelines to natively support document classification use cases. Additionally, we've refactored our transformers training and export integration code, enabling reuse across use cases. We've added a deployment folder to our image classification integration so trained models export easily for deployment. Lastly, dynamic batch support was made more generic so that it works with any deployment pipeline.
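To illustrate the idea behind generic dynamic batch support, here is a minimal pure-Python sketch (not DeepSparse's actual implementation; the helper name and padding strategy are our own for illustration): an engine compiled for a fixed batch size is run over any number of inputs by chunking them, padding the final partial chunk, and trimming the padded outputs.

```python
from typing import Callable, List, Sequence


def run_dynamic_batch(
    inputs: Sequence,
    engine_batch_size: int,
    run_batch: Callable[[List], List],
) -> List:
    """Run a fixed-batch-size engine over an arbitrary number of inputs.

    Inputs are split into engine-sized chunks; the last chunk is padded by
    repeating its final element, and outputs for the padding are discarded.
    """
    outputs: List = []
    for start in range(0, len(inputs), engine_batch_size):
        chunk = list(inputs[start:start + engine_batch_size])
        valid = len(chunk)
        # Pad the final partial chunk up to the fixed engine batch size.
        while len(chunk) < engine_batch_size:
            chunk.append(chunk[-1])
        # Keep only the outputs that correspond to real (unpadded) inputs.
        outputs.extend(run_batch(chunk)[:valid])
    return outputs


# Example with a stand-in "engine" that doubles each input:
results = run_dynamic_batch([1, 2, 3, 4, 5], 2, lambda batch: [x * 2 for x in batch])
# results == [2, 4, 6, 8, 10]
```

The point of making this generic is that the chunk-pad-trim logic lives in one place and works for any pipeline whose engine call takes a fixed-size batch.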
- Minimum Python version changed to 3.7 as 3.6 reached EOL
- Protobuf version pinned for ONNX 1.12 compatibility, preventing installation failures on some systems
- Performance improvements for unstructured sparse-quantized convolutional neural networks implemented for throughput use cases
To review the full release notes, see https://github.com/neuralmagic/deepsparse/releases
DeepSparse Enterprise 90-Day Trial
In addition to our 1.2 DeepSparse Community release, we have also released DeepSparse Enterprise 1.2, which you can try in production for 90 days, for free!
What's Coming Soon
We are excited to announce that our engineering team has made great progress on ARM support in the DeepSparse Engine, targeting AWS Graviton3. Nightly releases are on the roadmap for testing before the end of November! We are seeking beta testers for our initial implementations of DeepSparse running on ARM. If you are interested in joining our beta program, contact Rob Greenberg via DM on our Community Slack.
Since late August, we’ve made a ton of progress on our Sparsify APIs and they are set to release as an open alpha before the end of November.
Sparsify provides pathways to grab or create sparse models that meet your business needs. Have a generic NLP sentiment analysis task, or an object detection problem with a class contained in the COCO dataset? Get started with the sparsify.package API to deliver a sparse model ready to tackle your ML task right out of the gate. Already have a complete dataset and a business case to solve, but can't get a model that meets your accuracy or performance requirements? Try the sparsify.auto API: feed in your dataset, use case, and optional target optimization metrics, retrain on your infrastructure, and it will generate a sparse-transfer learned model for your use case on your dataset. Additionally, we are working on a One-Shot API that takes you from a dense model to a sparse model WITHOUT kicking off a full retraining pipeline, saving a ton of time.
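Since the Sparsify APIs are unreleased, here is a purely hypothetical sketch of what a sparsify.auto-style call might look like. Only the inputs and outputs described above (dataset, use case, optional target metrics in; a sparse-transfer learned model out) come from this post; the function name, signature, and return shape below are illustrative stand-ins, not the real API.

```python
from typing import Dict, Optional


def auto_sparse_transfer(
    dataset_path: str,
    use_case: str,
    target_metrics: Optional[Dict[str, float]] = None,
) -> Dict[str, object]:
    """Hypothetical stand-in for a sparsify.auto-style workflow.

    Accepts a dataset, a use case, and optional target optimization metrics,
    and returns a descriptor for the resulting sparse model (a placeholder
    here, since the real alpha API is not yet public).
    """
    if target_metrics is None:
        # Illustrative default only; the real API's defaults are unknown.
        target_metrics = {"max_accuracy_drop": 0.01}
    return {
        "use_case": use_case,
        "dataset": dataset_path,
        "targets": target_metrics,
        "model_path": "sparse_model.onnx",  # placeholder output artifact
    }


result = auto_sparse_transfer("data/sentiment.csv", "sentiment-analysis")
```

The appeal of an interface shaped like this is that the caller only declares the task and targets, while the retraining and sparsification recipe selection happen behind the API.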
If this new product sounds interesting to you, we are looking for early testers! Contact us via Slack to hear first about the open alpha release of our Sparsify APIs in the coming weeks.
Until next time,
Rob Greenberg from Neural Magic’s Product Team