Deep Sparse

A Software Architecture for the Future of ML

Sparsify your deep learning models to minimize footprint & run on CPUs at GPU speeds.
Sparse-Quantized YOLOv5 | 4-Core Lenovo Yoga c940 | Details
Sparse Hugging Face BERT | 24-Core AWS c5.12xlarge | Details
Sparse-Quantized YOLOv3 | 4-Core Lenovo Yoga c940 | Details
Sparse-Quantized ResNet-50 | 24-Core AWS c5.12xlarge CPU | Details

Object Detection

10x Faster
12x Smaller

YOLOv5

GET STARTED

NLP

14x Faster
4.1x Smaller

Hugging Face BERT

GET STARTED

Object Detection

6x Faster
14.5x Smaller

YOLOv3

GET STARTED

Image Classification

7x Faster
6.9x Smaller

ResNet-50

GET STARTED

Passionate about our mission?
Want to have a significant impact in deep learning?
There's likely a place for you here!


Benefits


Unprecedented Performance –– Run models on CPUs at GPU speeds. No special hardware required.
Reduce Costs –– Deploy and scale models on commodity CPU servers from the cloud to the edge.
Smaller Footprint –– Unlock edge possibilities by reducing model footprint by up to 20x.
Run Anywhere –– Deploy with flexibility on premise, in the cloud, or at the edge.
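Where do footprint reductions like the 12x and 20x figures above come from? As a rough, illustrative arithmetic sketch (the sparsity level and byte sizes here are assumptions, and real savings are lower once sparse-index storage and unpruned layers are counted): removing 90% of the weights and quantizing the rest from FP32 to INT8 compounds two independent savings.

```python
# Back-of-the-envelope footprint arithmetic for a sparse-quantized model.
# Illustrative assumptions: unstructured sparsity, FP32 -> INT8 quantization,
# and no sparse-index/metadata overhead (real formats pay some overhead).

def compressed_ratio(sparsity: float, dense_bytes: int = 4, quant_bytes: int = 1) -> float:
    """Dense-FP32 size divided by sparse-quantized size (index overhead ignored)."""
    nonzero_frac = 1.0 - sparsity
    return dense_bytes / (nonzero_frac * quant_bytes)

# 90% sparsity + INT8: 4 bytes / (0.1 weights * 1 byte) = 40x in the ideal case.
print(round(compressed_ratio(0.90), 2))
```

The ideal-case number overshoots the quoted 12x-20x precisely because of the ignored overheads, which is why treating this as an upper bound, not a promise, is the right reading.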

Components (Community Edition)


Sparsify
Open-source, easy-to-use interface to automatically sparsify and quantize deep learning models for CPUs & GPUs.
SparseML
Open-source libraries and optimization algorithms for CPUs & GPUs, enabling integration with a few lines of code.
SparseZoo
Open-source neural network model repository for highly sparse and sparse-quantized models with matching pruning recipes for CPUs and GPUs.
DeepSparse Engine
Free CPU runtime that runs sparse models at GPU speeds.
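At the core of the sparsification that Sparsify and SparseML automate is magnitude pruning: zeroing out the smallest-magnitude weights until a target sparsity is reached. A minimal one-shot sketch in NumPy (my own illustration, not these libraries' API; the real tooling prunes gradually during training, driven by recipes):

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    k = int(sparsity * weights.size)              # number of weights to remove
    if k == 0:
        return weights.copy()
    flat = np.abs(weights).ravel()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    pruned = weights.copy()
    pruned[np.abs(weights) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)
w_sparse = magnitude_prune(w, 0.90)
print(f"achieved sparsity: {np.mean(w_sparse == 0):.2%}")
```

One-shot pruning at 90% usually costs accuracy; the gradual, recipe-driven schedules the components above provide exist to recover it during fine-tuning.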

Paths to Sparse Acceleration


[Diagram: three paths to sparse acceleration –– a dense model run directly, a SparseZoo model (optionally transfer-learned), or a model sparsified with SparseML/Sparsify, each deployed in the DeepSparse Engine]
A.) Original Dense Path
Take your dense model & run it in the DeepSparse Engine, without any changes.
B.) SparseZoo Path
Take a pre-optimized model & run it in the DeepSparse Engine, or transfer learn with your data.
C.) Sparsified Path
Sparsify and quantize your dense model with ease & run it in the DeepSparse Engine.
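Quantization, the other half of "sparse-quantized" in path C, maps FP32 weights onto 8-bit integers with a shared scale. A minimal symmetric per-tensor sketch (an assumed scheme for illustration; production tooling typically uses calibrated, often per-channel quantization):

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor INT8 quantization: x is approximated by scale * q."""
    scale = np.abs(x).max() / 127.0               # one step of the int8 grid
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(1)
w = rng.normal(scale=0.1, size=(128,)).astype(np.float32)
q, scale = quantize_int8(w)
err = np.abs(dequantize(q, scale) - w).max()
print(f"max abs error: {err:.5f} (step size {scale:.5f})")
```

The round-trip error is bounded by half a quantization step, which is why INT8 inference can stay close to FP32 accuracy while storing each weight in a quarter of the bytes.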
Use the Deep Sparse Platform to Build and Deploy Accurate Deep Learning Models Faster
Using Compound Sparsification for Faster and More Accurate BERT

Embark on a more flexible journey of software-delivered AI.