Efficient Memory Management for Large Language Model Serving with PagedAttention

Woosuk Kwon (UC Berkeley), Zhuohan Li (UC Berkeley), Siyuan Zhuang (UC Berkeley), Ying Sheng (UC Berkeley & Stanford University), Lianmin Zheng (UC Berkeley), Cody Hao Yu (Independent Researcher), Joseph E. Gonzalez (UC Berkeley), Hao Zhang (UC San Diego), Ion Stoica (UC Berkeley) (2023)

Paper Information

arXiv ID: 2309.06180
Venue: Symposium on Operating Systems Principles (SOSP), 2023
Domain: computer science / artificial intelligence
SOTA Claim: Yes
Code: https://github.com/vllm-project/vllm
Reproducibility: 9/10

Abstract

High throughput serving of large language models (LLMs) requires batching sufficiently many requests at a time. However, existing systems struggle because the key-value cache (KV cache) memory for each request is huge and grows and shrinks dynamically. When managed inefficiently, this memory can be significantly wasted by fragmentation and redundant duplication, limiting the batch size. To address this problem, we propose PagedAttention, an attention algorithm inspired by the classical virtual memory and paging techniques in operating systems. On top of it, we build vLLM, an LLM serving system that achieves (1) near-zero waste in KV cache memory and (2) flexible sharing of KV cache within and across requests to further reduce memory usage. Our evaluations show that vLLM improves the throughput of popular LLMs by 2-4× with the same level of latency compared to the state-of-the-art systems, such as FasterTransformer and Orca. The improvement is more pronounced with longer sequences, larger models, and more complex decoding algorithms. vLLM's source code is publicly available at https://github.com/vllm-project/vllm.

Summary

This paper introduces PagedAttention and presents vLLM, a high-throughput serving system for large language models (LLMs). The authors identify inefficient memory management, particularly of the key-value (KV) cache, as the main bottleneck in existing LLM serving systems. They propose PagedAttention, an attention algorithm that stores the KV cache in fixed-size blocks that need not be contiguous in memory, which reduces fragmentation and enables memory sharing among requests. Built on top of it, the vLLM system achieves 2-4× higher throughput than leading systems such as FasterTransformer and Orca at the same level of latency. The paper evaluates vLLM on several models and workloads, demonstrating its ability to handle long sequences and complex decoding algorithms, and it discusses the architecture and implementation details of vLLM, highlighting how the system addresses memory challenges and optimizes performance for high-demand LLM applications.
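
To make the block-level bookkeeping described above concrete, the toy Python sketch below shows how a block table can map each sequence to non-contiguous, fixed-size physical blocks, allocate blocks only as tokens arrive, share blocks across sequences via reference counts, and apply copy-on-write when a shared block is written. This is not vLLM's implementation; the class, method names, and block size are illustrative assumptions.

```python
# Toy bookkeeping sketch of the block-table idea (NOT vLLM's implementation).
# Only block ownership is modeled; the KV tensor contents are omitted.
class BlockManager:
    def __init__(self, num_blocks: int, block_size: int = 16):
        self.block_size = block_size
        self.free = list(range(num_blocks))  # free physical block IDs
        self.ref = {}                        # physical block ID -> reference count
        self.tables = {}                     # sequence ID -> list of physical block IDs

    def _alloc(self) -> int:
        block = self.free.pop()
        self.ref[block] = 1
        return block

    def append_token(self, seq_id: str, pos: int) -> None:
        """Reserve KV-cache space for the token at position `pos` of `seq_id`.
        Only the last block of a sequence may be partially filled."""
        table = self.tables.setdefault(seq_id, [])
        if pos % self.block_size == 0:       # last block full (or first token): new block
            table.append(self._alloc())
        elif self.ref[table[-1]] > 1:        # last block shared: copy-on-write
            self.ref[table[-1]] -= 1
            table[-1] = self._alloc()        # a real system would also copy the KV data

    def fork(self, parent_id: str, child_id: str) -> None:
        """Share the parent's blocks with a child sequence (e.g. parallel sampling)."""
        self.tables[child_id] = list(self.tables[parent_id])
        for block in self.tables[child_id]:
            self.ref[block] += 1

    def free_seq(self, seq_id: str) -> None:
        for block in self.tables.pop(seq_id):
            self.ref[block] -= 1
            if self.ref[block] == 0:
                del self.ref[block]
                self.free.append(block)


# Example: a 3-token prompt shared by two sampled continuations.
mgr = BlockManager(num_blocks=8, block_size=2)
for pos in range(3):
    mgr.append_token("seq0", pos)            # uses 2 blocks (one partially filled)
mgr.fork("seq0", "seq1")                     # both sequences reference the same blocks
mgr.append_token("seq1", 3)                  # triggers copy-on-write of the last block
print(mgr.tables)                            # {'seq0': [7, 6], 'seq1': [7, 5]}
```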

Methods

This paper employs the following methods:

  • PagedAttention (see the attention sketch below)
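
As referenced above, the core idea of PagedAttention is that attention reads a sequence's keys and values block by block through a block table rather than from one contiguous buffer. The NumPy sketch below illustrates this for a single head and a single query token; the block size, array shapes, and variable names are illustrative assumptions, and the real system implements this as a fused GPU kernel.

```python
# Minimal NumPy sketch of the PagedAttention idea (not the paper's CUDA kernel):
# a sequence's KV cache lives in fixed-size blocks scattered in a physical pool,
# and attention gathers them through a block table.
import numpy as np

BLOCK_SIZE = 16      # tokens per KV block (vLLM's default block size is 16)
HEAD_DIM = 64        # per-head hidden dimension (illustrative)
NUM_BLOCKS = 128     # physical blocks in the pool

# Physical KV pool: [num_blocks, block_size, head_dim]
k_pool = np.random.randn(NUM_BLOCKS, BLOCK_SIZE, HEAD_DIM).astype(np.float32)
v_pool = np.random.randn(NUM_BLOCKS, BLOCK_SIZE, HEAD_DIM).astype(np.float32)

def paged_attention(query, block_table, context_len):
    """Attention for one query token over a sequence whose KV cache is spread
    across the (possibly non-contiguous) physical blocks in block_table."""
    # Gather the logical KV sequence from physical blocks, then trim padding.
    k = k_pool[block_table].reshape(-1, HEAD_DIM)[:context_len]
    v = v_pool[block_table].reshape(-1, HEAD_DIM)[:context_len]
    scores = k @ query / np.sqrt(HEAD_DIM)   # [context_len]
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                     # softmax over the context
    return probs @ v                         # [head_dim]

# The sequence's 40 cached tokens occupy three non-contiguous physical blocks.
out = paged_attention(np.random.randn(HEAD_DIM).astype(np.float32),
                      block_table=[7, 3, 42], context_len=40)
print(out.shape)  # (64,)
```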

Models Used

  • GPT
  • OPT
  • LLaMA

Datasets

The following datasets were used in this research:

  • ShareGPT
  • Alpaca

Evaluation Metrics

  • Throughput
  • Normalized latency (defined in the sketch below)
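
The paper reports normalized latency as each request's end-to-end latency divided by its output token count, averaged over all requests (seconds per token). The snippet below is a minimal sketch of that computation; the function and variable names are ours.

```python
# Sketch of the normalized-latency metric: per-request end-to-end latency
# divided by the number of generated tokens, averaged over requests.
def normalized_latency(latencies_s, output_lens):
    per_request = [lat / n_tokens for lat, n_tokens in zip(latencies_s, output_lens)]
    return sum(per_request) / len(per_request)

# e.g. two requests: 4.2 s for 128 tokens, 9.0 s for 300 tokens
print(normalized_latency([4.2, 9.0], [128, 300]))  # ~0.031 s/token
```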

Results

  • vLLM improves the throughput of popular LLMs by 2-4× at the same level of latency compared to state-of-the-art systems such as FasterTransformer and Orca, with larger gains for longer sequences, larger models, and more complex decoding algorithms.

Limitations

The authors identified the following limitations:

  • Existing systems struggle with KV cache memory management: they pre-allocate contiguous memory per request sized to the maximum possible sequence length, which causes internal and external fragmentation and prevents sharing of KV cache across requests (see the back-of-the-envelope calculation below).
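
To see why per-request pre-allocation is costly, here is a back-of-the-envelope calculation in the spirit of the paper's OPT-13B example (fp16 KV cache, 40 layers, hidden size 5120); the script itself is an illustrative sketch.

```python
# KV-cache footprint per token for OPT-13B in fp16, following the paper's
# back-of-the-envelope numbers.
layers, hidden, fp16_bytes = 40, 5120, 2
per_token = 2 * layers * hidden * fp16_bytes         # 2 = one key + one value vector per layer
print(per_token / 1024, "KB per token")              # ~800 KB
# A system that reserves a contiguous slot for the maximum length (2048 tokens)
# holds ~1.6 GB per request, even if the request generates far fewer tokens.
print(per_token * 2048 / 1024**3, "GB per request")  # ~1.6 GB
```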

Technical Requirements

  • Number of GPUs: 4
  • GPU Type: NVIDIA A100
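
For context, the released vLLM library (linked under Code above) can be driven with a few lines of Python. The model choice and sampling settings below are illustrative, and tensor_parallel_size=4 mirrors the 4-GPU setup listed above.

```python
# Minimal usage sketch of the open-source vLLM library (pip install vllm).
# Model and sampling settings are illustrative choices, not the paper's setup.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-13b", tensor_parallel_size=4)  # shard across 4 GPUs
params = SamplingParams(temperature=0.8, max_tokens=128)
outputs = llm.generate(["Explain paged attention in one sentence."], params)
print(outputs[0].outputs[0].text)
```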

Keywords

large language models, memory management, virtual memory, paging, attention mechanisms

External Resources

  • vLLM source code: https://github.com/vllm-project/vllm
  • arXiv page: https://arxiv.org/abs/2309.06180