Accelerating Machine Learning Inference on CPU with VMware vSphere and Neural Magic
This blog was originally posted by Na Zhang on VMware's Office of the CTO Blog. You can see the original copy here. Increasingly large deep learning (DL) models require a significant amount of computing, memory, and energy, all of which become a bottleneck in real-time inference where resources are limited. In this post, we detail our… Read More
Neural Magic January 2021 Product Release
We are excited to announce the Neural Magic January 2021 product release. This milestone contains new product features, an improved user experience, and stability enhancements that make it simpler for our clients to achieve GPU-class performance on commodity CPUs.
NEW - Introducing Sparsify BETA: experience-driven tooling to simplify the process of analyzing and… Read More
Product Release Notes
Release 0.1.0 for the Community! February 4, 2021
As of February 2021, our products have been renamed, most have been open sourced, and their release notes can be found on GitHub:
- Sparsify
- SparseML (formerly Neural Magic ML Tooling)
- SparseZoo (formerly Neural Magic Model Repo)
- DeepSparse Engine (formerly Neural Magic Inference Engine)
Release 1.4.0 January… Read More
Neural Magic Launches High-Performance Inference Engine and Tool Suite for CPUs
Run computer vision models at lower cost with a suite of new tools that simplify model performance. Today, Neural Magic is announcing the release of its Inference Engine software, the NM Model Repo, and our ML Tooling. Now, data science teams can run computer vision models in production on commodity CPUs – at a fraction… Read More