Fairness in Serving Large Language Models

Publication, Conference
Sheng, Y; Cao, S; Li, D; Zhu, B; Li, Z; Zhuo, D; Gonzalez, JE; Stoica, I
Published in: Proceedings of the 18th USENIX Symposium on Operating Systems Design and Implementation, OSDI 2024
January 1, 2024

High-demand LLM inference services (e.g., ChatGPT and BARD) support a wide range of requests, from short chat conversations to long document reading. To process all client requests fairly, most major LLM inference services impose request rate limits so that no client can dominate the request queue. However, this rudimentary notion of fairness also results in under-utilization of resources and a poor client experience when there is spare capacity. While there is a rich literature on fair scheduling, serving LLMs presents new challenges because of their unpredictable request lengths and their unique batching characteristics on parallel accelerators. This paper introduces a definition of LLM serving fairness based on a cost function that accounts for the number of input and output tokens processed. To achieve fairness in serving, we propose a novel scheduling algorithm, the Virtual Token Counter (VTC), a fair scheduler built on the continuous batching mechanism. We prove a 2× tight upper bound on the service difference between two backlogged clients while adhering to the work-conserving requirement. Through extensive experiments, we demonstrate the superior performance of VTC in ensuring fairness, especially in contrast to baseline methods, which exhibit shortcomings under various conditions. The reproducible code is available at https://github.com/Ying1123/VTC-artifact.
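As a rough illustration of the scheduling idea summarized in the abstract, below is a minimal Python sketch of a counter-based fair scheduler in the spirit of VTC. It is not the authors' implementation: the class name, the token weights, and the helper methods are assumptions made for illustration only; the actual, reproducible code is in the repository linked above.

from collections import deque

class VirtualTokenCounterSketch:
    """Illustrative sketch (not the paper's code): track a per-client virtual
    counter of service received, measured by a token-based cost function, and
    always serve the backlogged client with the smallest counter."""

    def __init__(self, w_input=1.0, w_output=2.0):
        self.w_input = w_input      # assumed weight per input token
        self.w_output = w_output    # assumed weight per output token
        self.counters = {}          # client -> accumulated virtual service
        self.queues = {}            # client -> pending requests

    def add_request(self, client, request):
        if client not in self.queues:
            # Lift a newly backlogged client's counter to the minimum among the
            # currently backlogged clients, so idle time cannot be banked.
            baseline = min((self.counters[c] for c in self.queues), default=0.0)
            self.counters[client] = max(self.counters.get(client, 0.0), baseline)
            self.queues[client] = deque()
        self.queues[client].append(request)

    def next_request(self):
        """Pick the request of the backlogged client with the lowest counter.
        Work-conserving: something is dispatched whenever any queue is nonempty."""
        if not self.queues:
            return None
        client = min(self.queues, key=lambda c: self.counters[c])
        request = self.queues[client].popleft()
        if not self.queues[client]:
            del self.queues[client]
        return client, request

    def charge(self, client, n_input_tokens, n_output_tokens):
        # Charge the service actually received, using the token cost function.
        self.counters[client] = self.counters.get(client, 0.0) + \
            self.w_input * n_input_tokens + self.w_output * n_output_tokens

In this sketch, a dispatcher would call add_request when a request arrives, next_request whenever the continuous-batching loop has spare capacity, and charge after each step with the tokens actually processed, keeping backlogged clients' counters within a bounded gap.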

Published In

Proceedings of the 18th USENIX Symposium on Operating Systems Design and Implementation, OSDI 2024

Publication Date

January 1, 2024

Start / End Page

965 / 988

Citation

APA: Sheng, Y., Cao, S., Li, D., Zhu, B., Li, Z., Zhuo, D., … Stoica, I. (2024). Fairness in Serving Large Language Models. In Proceedings of the 18th USENIX Symposium on Operating Systems Design and Implementation, OSDI 2024 (pp. 965–988).
Chicago: Sheng, Y., S. Cao, D. Li, B. Zhu, Z. Li, D. Zhuo, J. E. Gonzalez, and I. Stoica. “Fairness in Serving Large Language Models.” In Proceedings of the 18th USENIX Symposium on Operating Systems Design and Implementation, OSDI 2024, 965–88, 2024.
ICMJE: Sheng Y, Cao S, Li D, Zhu B, Li Z, Zhuo D, et al. Fairness in Serving Large Language Models. In: Proceedings of the 18th USENIX Symposium on Operating Systems Design and Implementation, OSDI 2024. 2024. p. 965–88.
MLA: Sheng, Y., et al. “Fairness in Serving Large Language Models.” Proceedings of the 18th USENIX Symposium on Operating Systems Design and Implementation, OSDI 2024, 2024, pp. 965–88.
NLM: Sheng Y, Cao S, Li D, Zhu B, Li Z, Zhuo D, Gonzalez JE, Stoica I. Fairness in Serving Large Language Models. Proceedings of the 18th USENIX Symposium on Operating Systems Design and Implementation, OSDI 2024. 2024. p. 965–988.
