Neural Magic, Cisco, and Intel Collaborate to Accelerate Deep Learning Performance


We are excited to announce a collaboration between Neural Magic, Cisco, and Intel to accelerate deep learning performance.

Today, enterprises struggle to get trained machine learning models into production in support of their mission-critical business applications and subsequent inference needs. Too often, they make trade-offs in performance, accuracy, flexibility, and cost because of the specialized hardware required to run these machine learning pipelines. You can read more about this in a recent deep learning survey we published here.

Neural Magic, in collaboration with Cisco and Intel, is looking to change this dynamic and offer a simplified solution to enterprises struggling with machine learning inference. When running on Cisco UCS or HyperFlex platforms with 2nd Generation Intel Xeon processors, the Neural Magic Inference Engine, a software accelerator, creates a dynamic deep learning environment optimized to run high-performance computer vision applications without the need for specialized hardware accelerators.
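For a rough sense of what CPU-only inference looks like in practice, here is a minimal sketch using Neural Magic's publicly available DeepSparse Python API. The `deepsparse` package, the `compile_model` call, and the model filename below are assumptions drawn from the open-source library, not details from this announcement:

```python
# Minimal sketch: running a computer vision model entirely on commodity CPUs
# with Neural Magic's DeepSparse engine. Assumes `pip install deepsparse` and
# an ONNX model on disk; the model path is a hypothetical placeholder.
import numpy as np
from deepsparse import compile_model

batch_size = 1
model_path = "resnet50_pruned.onnx"  # hypothetical sparsified CV model

# Compile the model for the local CPU; no GPU or hardware accelerator needed.
engine = compile_model(model_path, batch_size=batch_size)

# Dummy input matching a typical ImageNet-style model (1x3x224x224, float32).
inputs = [np.random.rand(batch_size, 3, 224, 224).astype(np.float32)]

# Run inference on CPU cores and inspect the output tensor shape.
outputs = engine.run(inputs)
print(outputs[0].shape)
```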

"We're excited to be working with Neural Magic on innovative ways to implement deep learning AI on Intel processors" - Justin Cohen, Innovation Architect, Cisco Toronto Innovation Center

With this solution, clients can realize:

1. GPU-class performance on commodity CPUs without sacrificing accuracy
2. Flexibility to deploy on-prem, in the cloud, or at the edge via the CI/CD pipelines they already have in place
3. Price-performant inference that reduces deployment total cost of ownership

As an example, Cisco saw 5-10X performance gains at its Toronto Innovation Centre using the Neural Magic Inference Engine and pre-trained models, compared to a non-optimized CPU deep learning model.

To learn more about how you can unlock the deep learning potential in your industry, schedule a live demo.