Connect with the ML Performance Community

Our team extends beyond our employee base. Whether it’s exchanging feature feedback on forums, incorporating pull requests into our product releases, or swapping sparsification stories at events, we wouldn’t be where we are without you. Share your expertise and learn from others by joining the ML Performance Community discussions on Discourse and Slack.

Install our ML Tools

Prune and quantize your models for inference performance.

Install and use our model sparsification tools
Our open-source model sparsification tools, Sparsify and SparseML, can be found on PyPI. More information is available on our Docs website. To see the code, visit our GitHub repo.
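As a quick sketch, assuming a Python 3 environment with pip available, installing both tools from PyPI typically looks like:

```shell
# Install Neural Magic's sparsification tools from PyPI
pip install sparseml
pip install sparsify
```

Using a virtual environment (e.g. `python3 -m venv`) is a good idea to keep these dependencies isolated from your system Python.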

Run on CPUs?
To run your model inference on CPUs, install our DeepSparse Engine from PyPI. More information is available on our Docs website.
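Assuming the same pip-based setup as above, the engine installs with a single command:

```shell
# Install the DeepSparse Engine for CPU inference
pip install deepsparse
```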

Sparse Model Zoo
Want to get started quickly using one of our optimized models? Our model repo, the SparseZoo, is publicly available.

Getting Support

Our team and community are here for you.

For detailed documentation, visit our docs page. 

For general help or questions, use our GitHub discussions. Everyone is welcome!

Discussions for Sparsify
Discussions for SparseML
Discussions for SparseZoo
Discussions for DeepSparse Engine 

For bugs or feature requests, use the corresponding repo’s issue tracker:

Issue tracker for Sparsify
Issue tracker for SparseML
Issue tracker for SparseZoo
Issue tracker for DeepSparse Engine

For more general questions about Neural Magic, please email us at [email protected].

Engage with the ML Performance Community

Meet like-minded people. Share your ML performance expertise. Learn from others.

Join our online communities to interact with our product and engineering teams along with other Neural Magic users and data scientists interested in simplifying and accelerating ML performance.

Join the conversation today:

Research Papers

See how Neural Magic contributes to the research community.

Our employees are passionate about contributing research back to the broader ML community. View our conference-accepted research papers below.

More

Videos

Learn from Neural Magic team and the community.

Introducing the Deep Sparse Platform

Go in-depth on the Deep Sparse Platform, our open-source deep learning sparsification tools and free CPU inference engine.

Big Brain Burnout: What’s Wrong with AI Computing?

If our brains processed information like today’s ML products consume computing power, you could fry an egg on your head. Learn why we need to rethink how we’re building products that rely on ML and AI.

Pruning Deep Learning Models for Success

Contrary to popular belief, pruning deep learning models is not that hard. Get an overview of pruning, along with easy ways to prune your own deep learning models.

Events

Join the community to learn and have fun, virtually and [soon] in person.

Catch up with your favorite community folks. Ask your toughest questions, meet local users, and walk away with some brand new swag.

Do you have a passion for machine learning performance? Want to join us as an event speaker, blog writer, or event panelist? Have other ideas? We’d love to hear from you. Contact us here.

Community Roadmap

Get insights and provide feedback on our community roadmap.

See and vote on our community product roadmap here. We would love to hear from you.

Subscribe

Get the latest ML performance tidbits.

You can get the latest news, webinar and event invites, research papers, and other ML performance tidbits by subscribing to the Neural Magic community email communications.