Sparsity in Deep Learning: Pruning and Growth for Efficient Inference and Training in Neural Networks (Survey Paper, 2021)
The future of deep learning is sparse! See our overview of the field and the opportunities ahead to gain 10-100x performance and fuel the next AI revolution.
Our research report shows that today’s sparsification methods can lead to a 10-100x reduction in model size and to corresponding theoretical gains in computational, storage, and energy efficiency, all without significant loss of accuracy.
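As a minimal sketch of one of the simplest sparsification methods covered in the report, magnitude pruning removes the weights with the smallest absolute values; the function name and sparsity level below are illustrative, not taken from the paper.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the `sparsity` fraction of weights with smallest magnitude."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to remove
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
pruned = magnitude_prune(w, sparsity=0.9)  # keep only ~10% of weights
```

Storing only the surviving nonzero values (plus their indices) is what yields the model-size reductions discussed above; realizing matching speedups additionally requires sparse-friendly kernels or hardware.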
If those speedups are realized in efficient hardware implementations, the gained performance could amount to a phase change, making more complex and possibly revolutionary tasks practical to solve.
Furthermore, we observe that the pace of progress in sparsification methods is accelerating: even in the months we spent working on this report, several new methods improving on the state of the art were published.