Connect with the Deep Sparse Community

Our team extends beyond our employee base. Whether it's interacting around feature feedback on forums, incorporating pull requests into our product releases, or swapping sparsification stories at events, we wouldn't be where we are without you. Whether you contribute code or simply use our tools, and whether you are a newcomer or an advanced practitioner, we strive to provide a place for people to chat about related topics, such as model deployments across CPUs, the role of sparsity in deep learning models, novel machine learning research, and more.


Engage with the Deep Sparse Community

Meet like-minded people. Share your expertise. Learn from others.

Join our online communities to interact with our product and engineering teams along with other Neural Magic users and developers interested in model sparsification and accelerating deep learning inference performance.

Install our ML Tools

Prune and quantize your models for inference performance.

Install and use our model sparsification tools
Our open-source model sparsification tools, Sparsify and SparseML, can be found on PyPI. More information can be found on our Docs website. To see the code, visit our GitHub repo.
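As a quick sketch, both tools can be installed from PyPI with pip (assuming the package names `sparseml` and `sparsify` as published on PyPI and a working Python environment):

```shell
# Install the open-source sparsification tools from PyPI
pip install sparseml
pip install sparsify
```

Installing into a virtual environment is a reasonable way to keep these tools isolated from other Python projects.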

Run on CPUs?
If you are interested in running your model inference on CPUs, you can install our DeepSparse Engine from PyPI. More information can be found on our Docs website.
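For example, assuming pip is available, the engine installs from PyPI in one step; the `deepsparse.benchmark` entry point mentioned in the comment is a CLI the package ships for profiling ONNX models on CPU (the model path shown is illustrative):

```shell
# Install the DeepSparse Engine from PyPI
pip install deepsparse

# Once installed, an ONNX model can be profiled on CPU, e.g.:
#   deepsparse.benchmark /path/to/model.onnx
```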

Sparse Model Zoo
Want to get started quickly using one of our optimized models? Our model repo, the SparseZoo, is publicly available.

Getting Support

Our team and community are ready to lend a hand.

DeepSparse Engine: Docs | GitHub Repo | Bugs and Feature Requests Queue | Releases Index

For more general questions about Neural Magic, please email us at [email protected]

Research Papers

See how Neural Magic contributes to the research community.

Our employees are passionate about contributing research back to the ML community at large. View our conference-approved research papers below.



Learn from the Neural Magic team and the community.

Introducing the Deep Sparse Platform

Go in-depth on the Deep Sparse Platform, our open source deep learning sparsification tools and a free CPU inference engine.

Big Brain Burnout: What’s Wrong with AI Computing?

If our brains processed information like today’s ML products consume computing power, you could fry an egg on your head. Learn why we need to rethink how we’re building products that rely on ML and AI.

Pruning Deep Learning Models for Success

Contrary to popular belief, pruning deep learning models is not that hard. Get an overview of pruning, as well as easy ways to prune your deep learning models.

Deep Sparse Community Events

Join the community to learn and have fun, virtually and [soon] in person.

Catch up with your favorite community folks. Ask your toughest questions, meet local users, and walk away with some brand new swag.

Do you have a passion for machine learning performance? Want to join us as an event speaker, blog writer, or event panelist? Have other ideas? We’d love to hear from you. Contact us here.

Community Roadmap

Get insights and provide feedback on our community roadmap

See our community product roadmap, and vote on it, here. We would love to hear from you.


Get the latest ML performance tidbits.

You can get the latest news, webinar and event invites, research papers, and other ML performance tidbits by subscribing to the Neural Magic email communications.
