Neural Magic delivers best-in-class deep learning performance on commodity CPUs. We do this via:

- Model optimization techniques like pruning and quantization
- Smart algorithms that utilize CPU memory more effectively

To help visualize the power of Neural Magic, we recorded three short end-to-end video guides on how to install our software, prepare and run a model… Read More: Neural Magic End-to-End Demo Videos
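One of the optimization techniques mentioned above, magnitude pruning, removes the smallest-magnitude weights from a layer. As a rough illustration only (not Neural Magic's implementation), a minimal unstructured magnitude-pruning step could look like this in NumPy:

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries so that roughly
    `sparsity` fraction of the weights become zero
    (unstructured magnitude pruning; illustrative sketch)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Example: pruning half of a 10-element weight vector
w = np.arange(1.0, 11.0)       # magnitudes 1..10
pruned = magnitude_prune(w, 0.5)  # the five smallest weights are zeroed
```

Quantization, the other technique listed, instead reduces the precision of the surviving weights (e.g. float32 to int8); both shrink the effective memory footprint of the model.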
Part 3: Gradual Magnitude Pruning (GMP) Hyperparameters
TL;DR: To facilitate the GMP process when pruning a network, several hyperparameters must be defined. These include general hyperparameters such as the learning rate, pruning update frequency, and pruning schedule function, in addition to the sparsity per layer. All of these hyperparameters affect end-level recovery, loss, and performance. Welcome to Part… Read More: Part 3: Gradual Magnitude Pruning (GMP) Hyperparameters
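The pruning schedule function named above determines how sparsity ramps up between pruning updates. A commonly used choice in the GMP literature is a cubic schedule (Zhu & Gupta, 2017); the sketch below is illustrative and not necessarily the exact function Neural Magic uses:

```python
def gmp_sparsity(step: int, start_step: int, end_step: int,
                 init_sparsity: float = 0.0,
                 final_sparsity: float = 0.9) -> float:
    """Cubic sparsity ramp used in gradual magnitude pruning:
    sparsity rises quickly early on (when many weights are
    redundant) and flattens out as it approaches the target."""
    if step <= start_step:
        return init_sparsity
    if step >= end_step:
        return final_sparsity
    progress = (step - start_step) / (end_step - start_step)
    return final_sparsity + (init_sparsity - final_sparsity) * (1.0 - progress) ** 3

# Example ramp from 0% to 90% sparsity over 100 steps:
# halfway through, sparsity is already 0.7875, well past half the target.
halfway = gmp_sparsity(50, 0, 100)
```

The pruning update frequency then controls how often this function is sampled and the per-layer masks are recomputed during training.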
Profile: Jeannie Finks
Jeannie Finks, Head of Customer Success at Neural Magic, has over 25 years of experience spanning customer success, digital strategy & implementation, and technical program leadership. Prior to joining Neural Magic, Jeannie held numerous hands-on customer success roles at Acquia, a SaaS company whose enterprise products, services, and technical support focus on the open-source CMS…
Challenging Memory Requirements and Performance Standards in ML
Everything we know about memory requirements in machine learning may be wrong. Today, when data scientists process deep learning models using a “throughput computing” device like a GPU, TPU, or similar hardware accelerator, they’re likely faced with a decision to shrink their model or input size to fit within the device’s memory limitations. Training a…