Real-time Recommendation Engine
Recommendation systems predict user preferences by using machine learning to understand past user behavior. For example, eCommerce and retail sites can use real-time recommendations powered by Neural Magic to create fine-tuned personalizations that improve customer loyalty, as well as increase conversion rates and cross-sell/upsell opportunities.
Improving Performance of Machine Learning Recommendations
Today, when machine learning engineers run recommendation models on a CPU, they often make sacrifices that degrade prediction quality by reducing:
- Model size
- Input size
Neural Magic addresses these limitations by generating GPU-class performance on a CPU.
Neural Magic currently supports recommendation models such as Deep Learning Recommendation Models (DLRM), Multilayer Perceptron (MLP) networks, and fully connected networks.
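To make the model types above concrete, here is a minimal, generic sketch of how an MLP-style recommender scores a user-item pair: user and item feature vectors are concatenated and passed through fully connected layers. This is an illustrative example only, not Neural Magic's implementation; the layer sizes and feature dimensions are arbitrary placeholders.

```python
import numpy as np

def relu(x):
    # Standard rectified linear activation used between layers.
    return np.maximum(0.0, x)

def mlp_score(user_vec, item_vec, weights, biases):
    """Forward pass through fully connected layers; returns a
    preference score in (0, 1) via a final sigmoid."""
    h = np.concatenate([user_vec, item_vec])
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(W @ h + b)
    logit = weights[-1] @ h + biases[-1]
    return 1.0 / (1.0 + np.exp(-logit))

# Toy example: 4-dim user and 4-dim item features, one hidden layer.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((16, 8)), rng.standard_normal((1, 16))]
biases = [np.zeros(16), np.zeros(1)]
score = mlp_score(rng.standard_normal(4), rng.standard_normal(4),
                  weights, biases)
```

In a production system the same forward pass runs over many candidate items per user, which is exactly the workload where larger models and inputs improve prediction quality.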
Neural Magic In the News
Announcing the Neural Magic Inference Engine
We are proud to announce the first version of the Neural Magic Inference Engine, offering GPU-class performance on commodity CPUs.
Neural Magic Announces $15 Million in Seed Funding
The seed investment is led by Comcast Ventures, with participation from NEA, Andreessen Horowitz, Pillar VC, and Amdocs.
Try Neural Magic Today
The Neural Magic Inference Engine fits seamlessly into existing CI/CD pipelines, can be deployed in containers or virtual machines, and can be managed with Kubernetes like any modern software application.
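As a sketch of the Kubernetes-managed deployment described above, a containerized inference service might be declared like this. The image name, port, and resource figures are illustrative placeholders under assumed settings, not Neural Magic's actual artifacts:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: recommendation-inference      # hypothetical service name
spec:
  replicas: 3                         # scale out like any stateless service
  selector:
    matchLabels:
      app: recommendation-inference
  template:
    metadata:
      labels:
        app: recommendation-inference
    spec:
      containers:
      - name: inference-engine
        image: registry.example.com/inference-engine:latest  # placeholder image
        ports:
        - containerPort: 8080
        resources:
          requests:
            cpu: "4"                  # CPU-only inference: request cores, no GPU
            memory: 8Gi
```

Because the engine runs on commodity CPUs, the deployment needs no GPU node pools or device plugins; it scales and rolls out like any other containerized application.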