Asynchronous Decentralized SGD with Quantized and Local Updates (NeurIPS 2021)

Decentralized optimization is emerging as a viable alternative for scalable distributed machine learning, but it also introduces new challenges in terms of synchronization costs. To mitigate these costs, several communication-reduction techniques, such as non-blocking communication, quantization, and local steps, have been explored in the decentralized setting. Due to the complexity of analyzing optimization under such relaxations, this line of work often assumes global communication rounds, which require additional synchronization. In this paper, we consider decentralized optimization in the simpler, but harder to analyze, asynchronous gossip model, in which communication occurs in discrete, randomly chosen pairings among nodes.
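As a rough illustration of this interaction model, the following minimal Python sketch simulates plain gossip averaging under random pairwise interactions; the ring topology, node count, and step budget are illustrative assumptions, not details from the paper.

```python
import random

import numpy as np

# Minimal sketch of the asynchronous gossip interaction model: time
# proceeds in discrete steps, and at each step one pair of neighboring
# nodes is chosen at random to interact (here, by averaging their
# local values). Topology and sizes are illustrative choices.
n_nodes, dim = 8, 4
values = [np.random.randn(dim) for _ in range(n_nodes)]
edges = [(i, (i + 1) % n_nodes) for i in range(n_nodes)]  # ring topology

for step in range(1000):
    i, j = random.choice(edges)          # random pairwise interaction
    avg = 0.5 * (values[i] + values[j])  # the pair averages its values
    values[i] = values[j] = avg

# All nodes drift toward the initial mean, without any global rounds.
print(np.std([v.mean() for v in values]))
```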

Perhaps surprisingly, we show that a variant of SGD called SwarmSGD still converges in this setting, even if non-blocking communication, quantization, and local steps are all applied in conjunction, and even if the node data distributions and underlying graph topology are both heterogeneous. Our analysis is based on a new connection with multi-dimensional load-balancing processes. We implement this algorithm and deploy it in a supercomputing environment, showing that it can outperform previous decentralized methods in terms of end-to-end training time, and that it can even rival carefully tuned large-batch SGD for certain tasks.
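The following hypothetical sketch shows what one such pairwise interaction could look like, combining a few local SGD steps with quantized model averaging; the `quantize` scheme, the `local_sgd_steps` helper, and all hyper-parameters are our own illustrative assumptions rather than the paper's exact SwarmSGD protocol.

```python
import numpy as np

def quantize(x, levels=256):
    """Illustrative stochastic quantizer (not the paper's exact scheme):
    maps coordinates onto a uniform grid with unbiased rounding."""
    scale = np.max(np.abs(x)) + 1e-12
    y = x / scale * (levels // 2)
    low = np.floor(y)
    y = low + (np.random.rand(*x.shape) < (y - low))  # stochastic rounding
    return y / (levels // 2) * scale

def local_sgd_steps(model, sample_grad, steps=4, lr=0.05):
    """Run a few local SGD steps between interactions (hypothetical
    hyper-parameters); sample_grad returns a stochastic gradient."""
    for _ in range(steps):
        model -= lr * sample_grad(model)
    return model

def swarm_interaction(model_i, model_j, grad_i, grad_j):
    """One hypothetical pairwise interaction: both nodes take local
    steps, then exchange quantized models and average with them."""
    model_i = local_sgd_steps(model_i, grad_i)
    model_j = local_sgd_steps(model_j, grad_j)
    qi, qj = quantize(model_i), quantize(model_j)
    return 0.5 * (model_i + qj), 0.5 * (model_j + qi)

# Hypothetical usage: two nodes minimizing ||w||^2 with noisy gradients.
w1, w2 = np.random.randn(10), np.random.randn(10)
grad = lambda w: 2 * w + 0.1 * np.random.randn(*w.shape)
w1, w2 = swarm_interaction(w1, w2, grad, grad)
```

Because each interaction touches only the chosen pair, such a step can run without any global barrier, which is what the non-blocking, asynchronous analysis has to account for.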
