FedBPT: Efficient Federated Black-box Prompt Tuning for Large Language Models

Publication, Conference
Sun, J; Xu, Z; Yin, H; Yang, D; Xu, D; Liu, Y; Du, Z; Chen, Y; Roth, HR
Published in: Proceedings of Machine Learning Research
January 1, 2024

Pre-trained language models (PLMs) have revolutionized the NLP landscape, achieving stellar performance across diverse tasks. These models, while benefiting from vast training data, often require fine-tuning on specific data to cater to distinct downstream tasks. However, this data adaptation process has inherent security and privacy concerns, particularly when leveraging user-generated, device-residing data. Federated learning (FL) provides a solution, allowing collaborative model fine-tuning without centralized data collection. However, applying FL to fine-tune PLMs is hampered by challenges, including restricted model parameter access due to high encapsulation, high computational requirements, and communication overheads. This paper introduces Federated Black-box Prompt Tuning (FedBPT), a framework designed to address these challenges. FedBPT allows the clients to treat the model as a black-box inference API. By focusing on training optimal prompts and utilizing gradient-free optimization methods, FedBPT reduces the number of exchanged variables, boosts communication efficiency, and minimizes computational cost and memory consumption. Experiments highlight the framework’s ability to drastically cut communication and memory costs while maintaining competitive performance. Ultimately, FedBPT presents a promising solution for efficient, privacy-preserving fine-tuning of PLMs in the age of large language models. Our code is available in NVIDIA FLARE.
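To make the abstract's idea concrete, the sketch below illustrates one way black-box, gradient-free prompt tuning could look inside a federated round: each client searches over a continuous prompt vector using only a scoring API (no gradients, no model weights), and the server averages the clients' prompt distributions. This is a minimal illustration, not the authors' implementation; the function and parameter names (`query_model`, `local_es_round`, `server_aggregate`, `PROMPT_DIM`) are hypothetical, and a simple evolution strategy stands in for whatever gradient-free optimizer FedBPT actually uses.

```python
# Hedged sketch of federated black-box prompt tuning (not the FedBPT code).
# The client only queries a black-box inference API with candidate prompts.
import numpy as np

PROMPT_DIM = 50  # assumed length of the continuous prompt vector


def query_model(prompt, batch):
    """Stand-in for the black-box inference API: returns a scalar loss.

    A real client would send the prompt plus a data batch to the PLM's
    inference endpoint; here a synthetic loss surface keeps the sketch runnable.
    """
    target = np.linspace(-1.0, 1.0, PROMPT_DIM)
    return float(np.mean((prompt - target) ** 2))


def local_es_round(mean, sigma, batch, popsize=8, elite=4, iters=20, rng=None):
    """One client round of a simple (mu, lambda) evolution strategy.

    No gradients or model weights are used: candidates are sampled around the
    current prompt mean, scored via the black-box API, and the best are kept.
    """
    rng = rng or np.random.default_rng()
    for _ in range(iters):
        candidates = mean + sigma * rng.standard_normal((popsize, PROMPT_DIM))
        losses = np.array([query_model(c, batch) for c in candidates])
        elites = candidates[np.argsort(losses)[:elite]]
        mean = elites.mean(axis=0)   # shift the search distribution
        sigma *= 0.95                # crude step-size decay
    return mean, sigma


def server_aggregate(client_means, client_sizes):
    """FedAvg-style weighted average of the clients' prompt vectors."""
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    return np.average(np.stack(client_means), axis=0, weights=weights)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    global_mean, sigma = np.zeros(PROMPT_DIM), 0.5
    client_batches = [list(range(32)), list(range(48)), list(range(16))]
    for rnd in range(3):  # a few federated rounds
        means = []
        for batch in client_batches:
            m, _ = local_es_round(global_mean.copy(), sigma, batch, rng=rng)
            means.append(m)
        global_mean = server_aggregate(means, [len(b) for b in client_batches])
        print(f"round {rnd}: loss = {query_model(global_mean, []):.4f}")
```

Note that only low-dimensional prompt statistics cross the network in this setup, which is the source of the communication and memory savings the abstract describes.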


Published In

Proceedings of Machine Learning Research

EISSN

2640-3498

Publication Date

January 1, 2024

Volume

235

Start / End Page

47159 / 47173

Citation

APA: Sun, J., Xu, Z., Yin, H., Yang, D., Xu, D., Liu, Y., … Roth, H. R. (2024). FedBPT: Efficient Federated Black-box Prompt Tuning for Large Language Models. In Proceedings of Machine Learning Research (Vol. 235, pp. 47159–47173).
Chicago: Sun, J., Z. Xu, H. Yin, D. Yang, D. Xu, Y. Liu, Z. Du, Y. Chen, and H. R. Roth. “FedBPT: Efficient Federated Black-box Prompt Tuning for Large Language Models.” In Proceedings of Machine Learning Research, 235:47159–73, 2024.
ICMJE: Sun J, Xu Z, Yin H, Yang D, Xu D, Liu Y, et al. FedBPT: Efficient Federated Black-box Prompt Tuning for Large Language Models. In: Proceedings of Machine Learning Research. 2024. p. 47159–73.
MLA: Sun, J., et al. “FedBPT: Efficient Federated Black-box Prompt Tuning for Large Language Models.” Proceedings of Machine Learning Research, vol. 235, 2024, pp. 47159–73.
NLM: Sun J, Xu Z, Yin H, Yang D, Xu D, Liu Y, Du Z, Chen Y, Roth HR. FedBPT: Efficient Federated Black-box Prompt Tuning for Large Language Models. Proceedings of Machine Learning Research. 2024. p. 47159–47173.
