The Future of Deep Learning is Sparse.

It all started in Cambridge, Massachusetts.

While mapping the neural connections in the brain at MIT, Neural Magic’s founders Nir Shavit and Alexander Matveev were frustrated with the many limitations imposed by GPUs. Along the way, they stopped to ask themselves a simple question: why is a GPU, or any specialized hardware, required for deep learning?

They knew there had to be a better way. After all, the human brain meets the computational demands of neural networks not by adding FLOPs to match them, but by exploiting sparsity extensively to reduce them.

Based on this observation and years of multicore computing experience, they created novel technologies that sparsify and quantize deep learning networks, allowing them to run on commodity CPUs at GPU speeds or better. Data scientists no longer have to compromise on model design and input size, or contend with scarce and costly GPU resources. This groundbreaking discovery became the foundation of Neural Magic.
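To make the core idea concrete, here is a minimal sketch of magnitude pruning, one common way to sparsify a network's weights. This is an illustrative example only, not Neural Magic's actual algorithm: the function name and the 90% sparsity level are assumptions chosen for demonstration. The point is simply that once small weights are zeroed, a sparse kernel can skip the multiply-adds those weights would have required.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest magnitude.

    Illustrative sketch only -- real sparsification pipelines prune
    gradually during training and use optimized sparse kernels.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) > threshold, weights, 0.0)

rng = np.random.default_rng(0)
w = rng.normal(size=(512, 512))          # a hypothetical dense weight matrix
w_sparse = magnitude_prune(w, sparsity=0.9)

dense_macs = w.size                      # one multiply-add per weight
sparse_macs = np.count_nonzero(w_sparse) # only nonzero weights do work
print(f"dense MACs:  {dense_macs}")
print(f"sparse MACs: {sparse_macs}")     # roughly 10x fewer multiply-adds
```

At 90% sparsity, a matrix-vector product needs about a tenth of the multiply-adds, which is the kind of reduction that lets sparse networks fit within a CPU's compute budget.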
We are looking for talented and ambitious team members to help us shatter the hardware barriers holding back the field of machine learning. Are you ready to challenge the norms?