Neural Magic 1.4 Product Release
We are excited to announce the Neural Magic 1.4 product release. This milestone delivers new product features, an improved user experience, and stability enhancements that make it simpler for our clients to achieve GPU-class performance on commodity CPUs.
NEW – Introducing Sparsify BETA
Experience-driven tooling that simplifies analyzing and optimizing deep learning models for performance, without sacrificing the accuracy required for business outcomes, through an interactive, GUI-based design. Users can leverage industry-leading techniques in model compression, pruning, and transfer learning, codified in simple, easy-to-use recipes that work in tandem with the SparseZoo or users' private models.
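The core compression technique behind such recipes can be illustrated with a minimal, framework-free sketch of unstructured magnitude pruning: zero out the smallest-magnitude weights until a target sparsity is reached. This is a hypothetical illustration of the general idea, not Sparsify's actual implementation.

```python
# Illustrative sketch of unstructured magnitude pruning (not Sparsify's
# implementation): the weights with the smallest absolute values are
# zeroed until the requested fraction of the tensor is sparse.

def magnitude_prune(weights, target_sparsity):
    """Return a copy of `weights` with the smallest-magnitude entries zeroed."""
    n_prune = int(len(weights) * target_sparsity)
    # Indices of the n_prune smallest-magnitude weights
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    to_zero = set(order[:n_prune])
    return [0.0 if i in to_zero else w for i, w in enumerate(weights)]

pruned = magnitude_prune([0.9, -0.05, 0.4, 0.01, -0.7, 0.2], 0.5)
# → [0.9, 0.0, 0.4, 0.0, -0.7, 0.0]: half the weights are now zero
```

Inference engines can then skip the zeroed multiplications entirely, which is where the CPU speedup comes from.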
SparseZoo
Shorten time to value and reduce the skill burden of building performant deep learning models by prototyping from a collection of pre-trained, performance-optimized models. The repository consists of popular image classification and object detection models and is constantly growing.
Performant model additions:
- YOLOv3 (COCO)
- ResNet-50-SSD-300 (VOC, COCO)
- MobileNetv2-SSDLite (VOC, COCO)
NM Inference Engine
Enables clients to run mission-critical deep learning models on commodity CPUs to reduce cost per inference and create price-performant deployments. This feature set includes the inference engine, ONNX conversion tooling, and an optional model server, and is focused on model deployment and scaling machine learning pipelines.
- Quantized (int8) convolution support for ResNet-50 on AVX-512 VNNI hardware
- Quantized (int8) support for depthwise convolutions
- Benchmarking API enhancements for ease of use
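The int8 support above rests on mapping real-valued tensors onto an 8-bit integer grid. A hedged, framework-free sketch of symmetric per-tensor int8 quantization (the general technique, not the engine's actual code):

```python
# Illustrative symmetric int8 quantization (not the engine's internals):
# each float is divided by a per-tensor scale and rounded into [-127, 127],
# so convolutions can run on fast int8 hardware paths such as AVX-512 VNNI.

def quantize_int8(values):
    """Map floats to int8 [-127, 127] with a symmetric per-tensor scale."""
    scale = max(abs(v) for v in values) / 127.0
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Map int8 values back to approximate floats."""
    return [v * scale for v in q]

q, scale = quantize_int8([0.6, -1.0, 0.25])
# -1.0 has the largest magnitude, so scale = 1.0 / 127
# q == [76, -127, 32]
```

Dequantizing `q` recovers the inputs to within half a quantization step, which is why int8 inference can preserve accuracy while cutting memory traffic and compute.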
NM ML Tooling
Enables data scientists to optimize their models for performance without sacrificing the accuracy required for business outcomes. This feature set includes model pruning APIs and CLIs as well as transfer learning APIs and CLIs, simplifying the process of achieving performance on deep learning models with Neural Magic.
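Pruning APIs of this kind typically ramp sparsity up gradually over training rather than pruning all at once, which helps preserve accuracy. A hedged sketch of the widely used cubic schedule from the pruning literature (an illustrative assumption, not necessarily the exact schedule these APIs apply):

```python
# Illustrative gradual-pruning schedule (an assumption for illustration,
# not necessarily these APIs' exact schedule): sparsity rises quickly
# early in training and flattens as it approaches the final target.

def sparsity_at(step, total_steps, s_init=0.0, s_final=0.9):
    """Cubic interpolation from s_init to s_final over total_steps."""
    frac = min(step / total_steps, 1.0)
    return s_final + (s_init - s_final) * (1.0 - frac) ** 3

# Sparsity ramps from 0.0 toward 0.9, most of it in the first half:
ramp = [sparsity_at(t, 100) for t in (0, 50, 100)]  # ~0.0, ~0.79, 0.9
```

The pruned model can then be fine-tuned (or transfer-learned) at the final sparsity to recover any lost accuracy.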
- Support for PyTorch 1.7
- Quantization-Aware Training and ONNX model export in PyTorch
- Keras exporter for ONNX
- Object Detection end-to-end install and benchmark notebooks added
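Quantization-Aware Training works by simulating int8 rounding in the forward pass ("fake quantization") so the network learns weights that tolerate the quantization error. A minimal sketch of the quantize-dequantize step at the heart of the idea (illustrative only, not PyTorch's QAT implementation):

```python
# Illustrative fake-quantization step from quantization-aware training:
# weights are quantized to the int8 grid and immediately dequantized in
# the forward pass, so training sees the rounding error it must absorb.
# (A sketch of the concept, not PyTorch's actual QAT machinery.)

def fake_quantize(values, num_bits=8):
    """Quantize-dequantize with a symmetric per-tensor scale."""
    qmax = 2 ** (num_bits - 1) - 1          # 127 for int8
    scale = max(abs(v) for v in values) / qmax
    return [round(v / scale) * scale for v in values]

weights = [0.30, -0.71, 0.02]
simulated = fake_quantize(weights)
# `simulated` is close to `weights` but snapped onto the int8 grid
```

Because the forward pass already behaves like the quantized model, the exported int8 ONNX graph typically loses far less accuracy than post-training quantization alone.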
For more details, check out our in-depth release notes.