There are three types of models in the Neural Magic Model Repo:

  1. Base: The baseline model, trained following the approach in the original paper. No Neural Magic optimizations have been applied to base models.
  2. Recal: A recalibrated model, optimized to the point of fully recovering the baseline model’s metrics. Yields better performance than the base model, with no impact on Top-1 accuracy.
  3. Recal-perf: A recalibrated model, optimized further for performance while recovering 99% of the baseline model’s metrics. Yields even better performance than the base and recal models, at the cost of roughly one point of accuracy.
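
To make the distinction concrete, here is a minimal sketch of how the three repo levels might be represented when choosing a model programmatically. The enum values and the pick_model helper are illustrative naming assumptions, not the repo's actual API.

    from enum import Enum

    class RecalLevel(Enum):
        """Hypothetical labels for the three repo model types."""
        BASE = "base"              # original training recipe, no optimizations
        RECAL = "recal"            # recalibrated, full baseline accuracy recovered
        RECAL_PERF = "recal-perf"  # performance-tuned, ~99% of baseline accuracy

    def pick_model(arch: str, level: RecalLevel) -> str:
        """Illustrative helper: build a lookup key from an architecture name
        and a recalibration level (the naming scheme is an assumption)."""
        return f"{arch}-{level.value}"

    print(pick_model("resnet50", RecalLevel.RECAL_PERF))  # -> "resnet50-recal-perf"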

Start Your Neural Magic Experience

Welcome!

Neural Magic brings deep learning performance to everyday CPUs.

We do this with two proprietary ingredients:
1. Model optimization techniques such as pruning and quantization (see the pruning sketch below)
2. Smart algorithms that use CPU memory more effectively
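
As an illustration of the first ingredient, here is a minimal sketch of unstructured magnitude pruning in PyTorch: the smallest-magnitude weights are zeroed until a target sparsity is reached. This is a generic example of the technique, not Neural Magic's proprietary implementation.

    import torch

    def magnitude_prune(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
        """Zero the smallest-magnitude entries so that roughly `sparsity`
        of the tensor's values are zero (unstructured magnitude pruning)."""
        k = int(sparsity * weight.numel())
        if k == 0:
            return weight.clone()
        threshold = weight.abs().flatten().kthvalue(k).values
        return weight * (weight.abs() > threshold)

    w = torch.randn(64, 128)              # a stand-in weight matrix
    w_pruned = magnitude_prune(w, 0.90)   # keep roughly 10% of the weights
    print(f"achieved sparsity: {(w_pruned == 0).float().mean():.2%}")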


This is a simplified, web-based simulation that will help you visualize the power of Neural Magic. A hands-on, integrated experience in your own environment will follow.

In this simulation, we will:

1. Determine your deep learning use case
2. Select a performance-tuned model from the Neural Magic Model Repo
3. Benchmark the model
4. Share results and propose next steps

What is your Deep Learning (DL) use case?

Neural Magic focuses on computer vision use cases, specifically image classification and object detection. Support for others is on our roadmap.
Image Classification
Object Detection
Other

Choose your image classification model:

The Neural Magic Model Repo contains models optimized with the latest techniques, ready to run in the Neural Magic Inference Engine. Here’s a list of all the models you can find there.

Unlike other repositories, Neural Magic has already done the hard work of building, pruning, and re-training the models for immediate use in production.

Pro tip: Image classification models in our repo have been trained on the ImageNet dataset. You can use our transfer learning API to make our optimizations work with your own data.
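
The snippet below is a minimal sketch of the idea behind sparse transfer learning for classification: swap the ImageNet head for your own classes, fine-tune, and re-apply the pruning masks after each update so zeroed weights stay zero. It uses a dense torchvision ResNet-50 and dummy data as stand-ins; the actual Neural Magic transfer learning API is not shown here.

    import torch
    import torchvision

    num_classes = 10                                    # your dataset's classes
    model = torchvision.models.resnet50(pretrained=True)           # stand-in backbone
    model.fc = torch.nn.Linear(model.fc.in_features, num_classes)  # new head

    # Record which weights are currently zero; for a pruned repo model these
    # masks would encode its sparsity pattern, to be preserved while fine-tuning.
    masks = {n: (p != 0).float() for n, p in model.named_parameters() if p.dim() > 1}

    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    images = torch.randn(8, 3, 224, 224)                # dummy batch of your data
    labels = torch.randint(0, num_classes, (8,))

    model.train()
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(images), labels)
    loss.backward()
    optimizer.step()

    # Re-apply the masks so pruned weights remain zero after the update.
    with torch.no_grad():
        for n, p in model.named_parameters():
            if n in masks:
                p.mul_(masks[n])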

Choose your object detection model:

The Neural Magic Model Repo contains models optimized with the latest techniques, ready to run in the Neural Magic Inference Engine. Here’s a list of all the models you can find there.

Unlike other repositories, Neural Magic has already done the hard work of building, pruning, and re-training the models for immediate use in production.

Pro tip: Object detection models in our repo have been trained on the COCO dataset. You can use our transfer learning API to make our optimizations work with your own data.
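
For object detection, the transfer learning idea looks similar: start from a COCO-pretrained model and retrain only the parts that are specific to your classes. The sketch below uses a standard torchvision Faster R-CNN as a stand-in for a repo model, freezing the backbone and replacing the box predictor; the Neural Magic transfer learning API itself is not shown.

    import torch
    import torchvision
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

    num_classes = 3  # your object classes + 1 for background
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)

    # Freeze the COCO-pretrained backbone; only the heads will be fine-tuned.
    for p in model.backbone.parameters():
        p.requires_grad = False

    # Replace the box predictor so it outputs scores and boxes for your classes.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

    # One dummy training step to show the expected input/target format.
    images = [torch.randn(3, 480, 640)]
    targets = [{"boxes": torch.tensor([[50.0, 60.0, 200.0, 220.0]]),
                "labels": torch.tensor([1])}]
    model.train()
    losses = model(images, targets)              # dict of detection losses
    sum(losses.values()).backward()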

Pick a specific model:

Let's put you in the driver's seat.

CPU Cores: 4–18
Batch Size: 1–64

Results

CPU Cores: 4–8
Batch Size: 1–64
With these settings, Neural Magic is up to 6x faster than other engines.
We deliver performance and cost savings through state-of-the-art optimization techniques and an engine that takes better advantage of both the CPU and the network architecture. Check out our pruning guide, or schedule a session to see what’s under the engine's hood.
[Chart: images/second by batch size for NM, DNNL, and CPU. Setup: AWS C5 instances, FP32.]

[Chart: cost savings per inference at batch size 1, shown as percent savings with Neural Magic versus DNNL and CPU.]
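
For reference, throughput figures like those above come from timing repeated forward passes at a fixed batch size and core count; a cost-per-inference figure then follows from the instance's hourly price. Below is a minimal sketch using plain PyTorch CPU inference as a stand-in engine; the thread count, batch size, and hourly price are illustrative assumptions.

    import time
    import torch
    import torchvision

    torch.set_num_threads(4)                     # mirror the "CPU Cores" setting
    batch_size = 64                              # mirror the "Batch Size" setting

    model = torchvision.models.resnet50(pretrained=True).eval()
    x = torch.randn(batch_size, 3, 224, 224)

    with torch.no_grad():
        for _ in range(3):                       # warm-up runs
            model(x)
        start = time.perf_counter()
        iters = 10
        for _ in range(iters):
            model(x)
        elapsed = time.perf_counter() - start

    throughput = iters * batch_size / elapsed    # images / second
    hourly_price = 0.68                          # assumed instance price, USD/hour
    cost_per_inference = hourly_price / (throughput * 3600)
    print(f"{throughput:.1f} images/s, ${cost_per_inference:.6f} per image")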

Congrats on initiating your Neural Magic experience!

We selected a use case, picked a model, and benchmarked its performance running in the Neural Magic Inference Engine.

For next steps, we'd love to give you access to our ML tools, APIs, and scripts so you can benchmark Neural Magic in your environment, on your time.
Benchmark Neural Magic in your own environment
Pick another use case