Join the DeepSparse ARM Waitlist

Neural Magic is bringing performant deep learning inference to ARM CPUs. In our recent 1.3 product release, we launched alpha support for DeepSparse on AWS Graviton. We are working towards a general release across ARM server, embedded, and mobile platforms in 2023. By joining the waitlist via the form below, you can get early access to the general release.

Why DeepSparse on ARM?

DeepSparse is an inference runtime that delivers GPU-class performance on CPUs. With DeepSparse, deep learning deployments gain the scalability and flexibility of software running on commodity hardware while meeting the performance demands of production, letting you simplify your operations and reduce your infrastructure costs. With support for ARM, DeepSparse will run on CPUs from all the major vendors: Intel, AMD, and ARM. This will enable you to run inference on lower-cost cloud instances with ARM cores such as AWS Graviton or GCP T2A, on edge devices such as the Raspberry Pi, and on mobile devices such as Android phones, providing a consistent deployment platform from cloud to edge.
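For context, this is roughly what a DeepSparse deployment looks like in Python today; the same code is intended to target x86 and, with the alpha build, ARM CPUs. This is a minimal sketch only: the model file name and input shape below are illustrative placeholders, and the ARM alpha may require a separate install step.

```python
# Minimal sketch of running a model with DeepSparse (illustrative placeholders).
import numpy as np
from deepsparse import compile_model

# "model.onnx" and the input shape below are placeholders; substitute your own ONNX model.
engine = compile_model("model.onnx", batch_size=1)

# DeepSparse consumes a list of numpy arrays matching the model's inputs.
inputs = [np.random.rand(1, 3, 224, 224).astype(np.float32)]
outputs = engine.run(inputs)
print(outputs[0].shape)
```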

Sign Up

Willing to Provide Feedback?

Neural Magic is offering a $50 Amazon gift card to anyone willing to spend at least 30 minutes discussing their experience with the ARM alpha or their feature requests as we approach general availability. Reach out at https://neuralmagic.com/contact/ to set up a time.
