
BERT-Large: Prune Once for DistilBERT Inference Performance

Jul 15, 2022


Compress BERT-Large with pruning and quantization to create a version that maintains accuracy while beating baseline DistilBERT performance and compression

Comparison of throughput performance for BERT-Large, Prune OFA BERT-Large, BERT-base, and DistilBERT with the DeepSparse Engine on a c6i.12xlarge.

In 2019, BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, a research paper from Google, introduced two versions of a transformative new NLP model: BERT-base and BERT-Large. Both were transformer-based architectures pre-trained on a relatively sizable text corpus, yet the base model has seen much greater adoption despite BERT-Large's higher accuracy. Looking at Hugging Face downloads alone, BERT-base receives roughly 15 times more downloads than BERT-Large, with over 15 million downloads in May of 2022. The reasons underlying this trend become clearer as soon as you start working with the models: unoptimized BERT-Large inference is 4.5x slower than BERT-base, and the model requires over 1GB of disk space. For an already computationally expensive NLP approach, the extra accuracy from BERT-Large generally doesn't justify the additional expense.

Given BERT-Large's sizable increase in parameters over the base model, an intriguing question arises: how many of those parameters are actually required for natural language processing? Intel's research team dove into this question in Prune Once for All: Sparse Pre-Trained Language Models. For BERT-Large, they found that 90% of the network could be pruned away with minimal effect on accuracy. To achieve this, pruning and distillation were combined during pre-training to create general, sparse architectures that are then fine-tuned and quantized on datasets for standard tasks such as SQuAD for question answering. The results are highly compressed networks that closely match the accuracy of the unoptimized versions. Intel has released the Prune OFA models on the Hugging Face model repository as part of their research.
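At its core, the approach pairs a hard-label task loss with a soft-label distillation loss from a dense teacher while gradually zeroing out the smallest-magnitude weights of the student. The sketch below is a simplified PyTorch illustration of those two ingredients, not Intel's actual training code; the function names and hyperparameters are hypothetical, and the real schedules are encoded in the SparseML recipes referenced later in this post.

import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Blend the hard-label task loss with a soft-label KD loss from the dense teacher.
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * hard + (1.0 - alpha) * soft

def magnitude_prune_(weight, sparsity):
    # Zero out the smallest-magnitude weights in place to hit the target sparsity.
    k = int(weight.numel() * sparsity)
    if k == 0:
        return
    threshold = weight.abs().flatten().kthvalue(k).values
    weight.data[weight.abs() <= threshold] = 0.0

In Prune OFA, the sparsity target is ramped up on a schedule during pre-training, and the sparse pattern learned there is then carried into downstream fine-tuning and quantization, which is what makes the pre-trained sparse models transferable across tasks.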

Figure 1 from Intel's Prune Once for All paper detailing the methods used for creating the sparse BERT-Large models.

Deploy with DeepSparse for an 11x Increase in Performance

Note: the results reported below are for the SQuAD dataset; however, they carry over to the other datasets and tasks found on the SparseZoo.

The freely available DeepSparse Engine is specifically engineered to speed up sparse and sparse-quantized networks. It leverages sparsity to reduce the overall compute and exploits the CPU's large caches for faster memory access. The combination of these two technologies leads to GPU-class performance on commodity CPUs. Combining DeepSparse with the Prune Once for All sparse-quantized models yields 11x better performance for throughput and 8x better performance for latency-based applications, beating BERT-base and achieving DistilBERT-level performance without sacrificing accuracy.
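If you want to try the sparse-quantized model behind these numbers yourself, DeepSparse ships a deepsparse.benchmark CLI and a Python Pipeline API. The snippet below is a minimal sketch: the SparseZoo stub is assumed from the recipe used in step 2 below, and the exact task name or output fields may vary slightly between DeepSparse versions.

from deepsparse import Pipeline

# Pull a sparse-quantized BERT-Large question-answering model from the SparseZoo
# and run it on the DeepSparse Engine (stub assumed from the training step below).
qa_pipeline = Pipeline.create(
    task="question-answering",
    model_path="zoo:nlp/question_answering/bert-large/pytorch/huggingface/squad/pruned80_quant-none-vnni",
)

output = qa_pipeline(
    question="What runs sparse-quantized BERT-Large on CPUs?",
    context="The DeepSparse Engine executes sparse-quantized transformer models on commodity CPUs.",
)
print(output.answer)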

Comparison of latency performance and F1 scores on SQuAD for Prune OFA BERT-Large, BERT-Large, BERT-base, and DistilBERT with the DeepSparse Engine on a c6i.12xlarge.

The graph above highlights the tradeoff between scaling a network's structured size and sparsifying it to remove redundancies. DistilBERT, the most performant of the dense models, has the fewest layers and channels and the lowest accuracy. With more layers and channels added, BERT-base is slower but more accurate. Finally, BERT-Large is the most accurate and the largest, but has the slowest inference. Despite the reduced number of parameters, the sparse-quantized BERT-Large comes close in accuracy to the dense version while running inference 8x faster. So, while the larger optimization space helped during training, not all of these pathways were necessary to maintain accuracy. The redundancies in these larger networks surface even more when comparing the file sizes required to store the models, as shown in the graph below.

Comparison of compression ratios for gzipped ONNX files for Prune OFA BERT-Large, BERT-base, and DistilBERT as compared to a dense, FP32 BERT-Large.
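To run the same kind of comparison on your own exports, here is a minimal sketch that gzips an ONNX file and reports the compression relative to the original; the file path is hypothetical and should point at your exported model.

import gzip
import os
import shutil

# Gzip an exported ONNX model and report the on-disk sizes (path is hypothetical).
src = "bert_large_uncased-squad/model.onnx"
dst = src + ".gz"
with open(src, "rb") as f_in, gzip.open(dst, "wb") as f_out:
    shutil.copyfileobj(f_in, f_out)

original_mb = os.path.getsize(src) / 1e6
compressed_mb = os.path.getsize(dst) / 1e6
print(f"{original_mb:.1f} MB -> {compressed_mb:.1f} MB "
      f"({original_mb / compressed_mb:.1f}x smaller)")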

Apply Your Data to These Results

Neural Magic has partnered with Intel's research team to open-source the algorithms, models, recipes, and code so you can leverage this research for your own data and deployments. The algorithms and code are available in SparseML; install it and start using the command-line integrations immediately in your terminal. The models and recipes are available in the SparseZoo, where the recipes encode all of the hyperparameters necessary to replicate and transfer this research onto your own data.

As an example, follow the steps below to replicate the SQuAD results published above, from training through deployment.

1. Install SparseML and DeepSparse

pip install sparseml[torch] deepsparse[server]

2. Train the Sparse Model with a Teacher

sparseml.transformers.train.question_answering \
  --output_dir bert_large_uncased-squad \
  --model_name_or_path zoo:bert-large-wikipedia_bookcorpus-pruned80.4block_quantized \
  --distill_teacher zoo:nlp/question_answering/bert-large/pytorch/huggingface/squad/base-none \
  --recipe zoo:nlp/question_answering/bert-large/pytorch/huggingface/squad/pruned80_quant-none-vnni \
  --dataset_name squad \
  --do_train \
  --do_eval \
  --per_device_train_batch_size 8 \
  --per_device_eval_batch_size 32 \
  --gradient_accumulation_steps 4 \
  --preprocessing_num_workers 6 \
  --max_seq_length 384 \
  --doc_stride 128

3. Deploy in the DeepSparse Server

deepsparse.server --task question_answering --batch_size 1 --model_path bert_large_uncased-squad
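Once the server is running, you can send requests to it over HTTP. The snippet below is a minimal Python client sketch; the port (5543) and the /predict route are assumed defaults, so check the server's startup logs for the exact address in your version.

import requests

# Query the DeepSparse question-answering server started above.
# Port 5543 and the /predict route are assumed defaults for this sketch.
response = requests.post(
    "http://localhost:5543/predict",
    json={
        "question": "Who introduced BERT-Large?",
        "context": "BERT-base and BERT-Large were introduced by Google researchers in 2019.",
    },
)
print(response.json())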

Credits

This blog was created in conjunction with Ofir Zafrir and Guy Boudoukh from Intel Labs. Special thanks to them for their great comments and collaboration.
