Product Release Notes

vLLM Release Roundup: What’s New in vLLM v0.6.4?

Nov 25, 2024

Introduction

The vLLM project continues to push the boundaries of open-source inference with every release. As a leading inference server for large language models, vLLM combines performance, flexibility, and community-driven innovation. In this blog, we explore the latest updates in v0.6.4, highlighting key improvements, bug fixes, and contributions from the community. From expanded model support to improved hardware compatibility, these releases make deploying state-of-the-art LLMs faster, more efficient, and more accessible than ever.

Highlights of v0.6.4 Release

The v0.6.4 release marks a significant step forward for the vLLM project, focusing on foundational advancements that pave the way for the V1 engine and the rollout of torch.compile support. While these features are still works in progress, they demonstrate vLLM’s commitment to cutting-edge innovation and long-term scalability. 

Progress Toward the V1 Engine

This release includes substantial progress in the V1 engine core refactor, laying the groundwork for a more modular and efficient architecture. Key updates involve core improvements and architectural changes designed to enhance scalability, flexibility, and maintainability. You can explore the design and long-term plans for the V1 engine in our recent meetup slides (starting on slide 17).

Advancements in torch.compile Support

The release also makes significant strides in torch.compile integration, with many models now supporting this feature via TorchInductor. This progress enhances model performance and usability, especially for users optimizing deployments across diverse hardware backends. Learn more about the torch.compile roadmap in our recent meetup slides (starting on slide 32).

New Features and Enhancements

New Model Support in vLLM v0.6.4

The v0.6.4 release brings significant expansions to model support, making vLLM a versatile platform for a broader range of large language models (LLMs), vision-language models (VLMs), and specialized tasks. These updates include:

  • New LLMs and VLMs: vLLM now supports cutting-edge models such as Idefics3, H2OVL-Mississippi, and Qwen2-Audio, alongside Pixtral models, including Pixtral-Large, in the Hugging Face Transformers format. vLLM remains the best place for running Pixtral Large models efficiently. FalconMamba and the Florence-2 language backbone expand vLLM's capabilities for multi-modal tasks and language understanding, making it easier to deploy models optimized for diverse real-world applications.
  • New Encoder-Only Embedding Models: Popular encoder-only models like BERT, RoBERTa, and XLM-RoBERTa have been integrated, allowing users to leverage them for embedding tasks. These additions make vLLM suitable for workflows involving natural language understanding, semantic similarity, and classification.
  • Expanded Task Support: vLLM now supports task-specific use cases, such as Llama embeddings, Math-Shepherd for Mistral reward modeling, and Qwen2 classification. Advanced embedding models like VLM2Vec (Phi-3-Vision embeddings), E5-V (LLaVA-NeXT embeddings), and Qwen2-VL embeddings further enhance the platform's ability to handle vision-language and hybrid tasks seamlessly. To simplify workflows, a user-configurable --task parameter (with values such as generate or embedding) has been introduced for models that support both generation and embeddings, along with a new OpenAI-style Chat Embeddings API (see the first sketch after this list).
  • Enhanced Tool-Calling and Fine-Tuning Capabilities: The release includes a tool-calling parser for models like Granite 3.0 and Jamba, as well as LoRA support for models such as Granite 3.0 MoE, Idefics3, Qwen2-VL, and Llama embeddings. These improvements make it easier to adapt and fine-tune models for custom applications (see the second sketch after this list).
  • Quantization and Processing Improvements: Support for bitsandbytes (BNB) quantization has been added for models like Idefics3, Mllama, Qwen2, and MiniCPMV, enhancing performance in resource-constrained environments. Additionally, a unified multi-modal processor for VLMs simplifies the handling of vision-language data, while an updated model interface streamlines development.
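
To make the new --task flag and embeddings support concrete, here is a minimal sketch of serving a model in embedding mode and querying the OpenAI-compatible /v1/embeddings route with the official OpenAI client. The model name, port, and API key are illustrative assumptions, not values taken from the release notes.

    # Launch the server in embedding mode (illustrative model name):
    #   vllm serve intfloat/e5-mistral-7b-instruct --task embedding
    from openai import OpenAI

    # Point the standard OpenAI client at the local vLLM server.
    client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

    response = client.embeddings.create(
        model="intfloat/e5-mistral-7b-instruct",
        input=["vLLM now serves embedding models through an OpenAI-style API."],
    )
    print(len(response.data[0].embedding))  # dimensionality of the returned vector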
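
A second minimal sketch shows the newly added LoRA support through the offline Python API. The base model ID and adapter path are illustrative assumptions; substitute any of the newly supported architectures and an adapter you have trained.

    from vllm import LLM, SamplingParams
    from vllm.lora.request import LoRARequest

    # Load one of the newly LoRA-enabled architectures with LoRA support switched on.
    llm = LLM(model="ibm-granite/granite-3.0-3b-a800m-instruct", enable_lora=True)

    # Apply a (hypothetical) adapter on a per-request basis.
    outputs = llm.generate(
        "Summarize the adapter's target domain in one sentence.",
        SamplingParams(max_tokens=48),
        lora_request=LoRARequest("my-adapter", 1, "/path/to/lora/adapter"),
    )
    print(outputs[0].outputs[0].text)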

With these comprehensive updates, vLLM continues to cater to both developers and researchers, enabling high-performance deployments across a growing array of tasks, models, and modalities. These enhancements underscore vLLM's commitment to staying at the forefront of AI inference innovation.

Expanded Hardware Support in vLLM v0.6.4

The vLLM v0.6.4 release introduces significant advancements in hardware support, broadening the range of devices optimized for high-performance inference. These updates reflect vLLM's commitment to making state-of-the-art model serving accessible across diverse hardware configurations, from CPUs to accelerators like TPUs and HPUs.

  • Intel Gaudi (HPU) Backend: For the first time, vLLM adds inference support for Intel's Gaudi processors, leveraging Intel’s unique architecture to deliver efficient high-performance computing for machine learning workloads. This enhancement broadens the hardware ecosystem, offering users an alternative to GPUs for large-scale model deployments.
  • Enhanced CPU Backend: vLLM continues to improve support for commodity hardware by adding embedding model compatibility to the CPU backend. This update ensures that users deploying on CPUs can now handle a wider range of tasks, making inference more accessible and cost-effective without requiring specialized hardware.
  • Optimized TPU Performance: Tensor Processing Unit (TPU) support sees crucial upgrades, including accurate peak memory usage profiling and an upgraded PyTorch XLA integration. These improvements provide better resource management and stability, enabling users to push the limits of performance on TPU clusters.
  • Triton for FP8 and INT8 Support: A new Triton kernel implementation of scaled matrix multiplication (scaled_mm_triton) adds support for FP8 and INT8 quantization with SmoothQuant. This notably enables AMD GPUs to run more forms of quantization; a brief serving sketch follows this list.
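
As a rough illustration of the quantization path above, the sketch below loads a pre-quantized INT8 (SmoothQuant-style w8a8) checkpoint with the offline API. The checkpoint name is a placeholder, and the quantization scheme is normally picked up from the checkpoint's own configuration at load time.

    from vllm import LLM, SamplingParams

    # Placeholder checkpoint ID: any w8a8 SmoothQuant-style model you have produced or
    # downloaded; on AMD GPUs the quantized matmuls can run through the new Triton
    # scaled_mm kernel.
    llm = LLM(model="your-org/llama-3-8b-instruct-w8a8-smoothquant")

    outputs = llm.generate("What does SmoothQuant change?", SamplingParams(max_tokens=48))
    print(outputs[0].outputs[0].text)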

These updates empower users to maximize inference efficiency on their hardware of choice, from high-end accelerators to cost-effective CPU solutions. With these advancements, vLLM continues to lead the way in delivering flexible and performant solutions for large-scale AI workloads.

Performance Enhancements in vLLM v0.6.4

The vLLM v0.6.4 release introduces targeted performance improvements that optimize inference efficiency and scalability. These updates combine innovative techniques to reduce latency and maximize throughput, ensuring vLLM remains the top choice for high-performance model serving.

  • Chunked Prefill with Speculative Decoding: The release combines chunked prefill (a technique for breaking input token processing into manageable chunks) with speculative decoding (a method for pre-emptively generating tokens based on predictions). This powerful combination improves inter-token latency, particularly for memory-bound workloads, enabling faster responses without sacrificing accuracy. The integration demonstrates vLLM's focus on delivering high performance even under resource constraints, making it ideal for real-time applications. A configuration sketch follows this list.
  • Fused MoE Performance Improvements: Performance for models using mixture-of-experts (MoE) architectures has been significantly improved through optimized fusion techniques. This update enhances execution efficiency for expert-layer computations, reducing overhead and boosting throughput for tasks leveraging large-scale MoE models.
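
For the chunked-prefill-plus-speculative-decoding combination described above, here is a minimal configuration sketch using the offline API. The target and draft model names and the token count are illustrative assumptions; the same options map to the server flags --enable-chunked-prefill, --speculative-model, and --num-speculative-tokens.

    from vllm import LLM, SamplingParams

    llm = LLM(
        model="meta-llama/Llama-3.1-8B-Instruct",              # target model (assumed)
        speculative_model="meta-llama/Llama-3.2-1B-Instruct",  # small draft model (assumed)
        num_speculative_tokens=5,      # tokens the draft model proposes per step
        enable_chunked_prefill=True,   # split long prompts into smaller prefill chunks
    )

    outputs = llm.generate("Explain chunked prefill in one sentence.",
                           SamplingParams(max_tokens=64))
    print(outputs[0].outputs[0].text)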

Other Improvements in vLLM v0.6.4

Beyond core enhancements, vLLM v0.6.4 includes a variety of improvements aimed at refining the development experience, enhancing testing and documentation, and expanding benchmarking capabilities. These updates contribute to a smoother and more efficient user and contributor experience.

  • Improved Pull Request Workflow: The pull request process has been enhanced with tools like DCO (Developer Certificate of Origin), Mergify for automated merging, and a stale bot to manage inactive PRs. These additions streamline contributions, ensuring a more organized and efficient process for both maintainers and contributors.
  • Dropped Python 3.8 Support: With this release, vLLM officially ends support for Python 3.8. This decision reflects the project's focus on leveraging newer Python features and maintaining compatibility with actively supported Python versions. Users are encouraged to upgrade to Python 3.9 or later for continued support.
  • Basic TPU Integration Test: To ensure robust TPU support, a basic integration test has been added. This test validates the functionality of TPU-specific features, improving reliability for users deploying vLLM on TPU infrastructure.
  • Enhanced Documentation: The release includes detailed documentation updates, such as the class hierarchy in vLLM and a comprehensive explanation of its integration with Hugging Face. These improvements make it easier for new users to understand the architecture and for developers to extend the system effectively.
  • Expanded Benchmarking Support: The throughput benchmark now supports image inputs, enabling users to measure performance for vision-language models and other image-based tasks. This broadens vLLM's applicability to multi-modal applications, ensuring comprehensive performance insights across diverse workloads.

These improvements enhance usability, encourage community participation, and expand vLLM's capabilities, making this release a significant step forward in delivering a polished and developer-friendly inference platform.

Bug Fixes and Deprecations in vLLM v0.6.4

The vLLM v0.6.4 release addresses several bugs, improving system stability and functionality across a range of use cases. Notable fixes include resolving issues with latency in specific configurations during multi-model deployments, which previously impacted high-throughput applications. Additionally, speculative decoding reliability has been enhanced by addressing edge cases that could cause inconsistencies in high-QPS scenarios. These fixes ensure that vLLM continues to deliver dependable performance, even under demanding conditions.

In terms of deprecations, this release officially ends support for Python 3.8. As Python 3.8 has reached the end of its lifecycle, this change allows the project to leverage newer Python features and focus on actively supported versions, including Python 3.9 and later. Users are encouraged to update their environments to ensure compatibility with future vLLM releases.

These bug fixes and deprecations highlight vLLM’s commitment to maintaining a robust and forward-looking framework, ensuring users can rely on a stable and evolving platform for their inference needs.

Conclusion

The vLLM v0.6.4 and v0.6.4.post1 releases bring exciting improvements, from new model support to refined performance optimizations. Whether you’re a developer deploying large-scale applications or a researcher experimenting with cutting-edge techniques, these updates enhance the vLLM experience.

We invite you to explore the latest version, share your feedback, and join our bi-weekly vLLM Office Hours to stay ahead of the latest developments.

Additional Resources

Dive deeper into the details of the v0.6.4 and v0.6.4.post1 releases through the official release notes and documentation. Whether you're looking for technical references, want to share feedback, or connect with the community, we've got you covered.

Have questions or feedback? Join the discussion and connect with us through the vLLM community channels.

These resources are your gateway to learning, contributing, and staying engaged with the vibrant vLLM community. See you there!
