
Department of Computer Science and Technology

Date: Tuesday, 28 January, 2025 - 15:00 to 16:00
Speaker: Dongqi Cai - Beijing University of Posts and Telecommunications
Venue: Computer Lab, FW26

Training large language models (LLMs) is often seen as a resource-intensive task, requiring massive computational power centralized in data centers. But what if you could train powerful models directly on your everyday devices? In this talk, we introduce cutting-edge techniques that bring efficient LLM training to mobile and edge devices, overcoming constraints like limited memory, processing power, and network bandwidth. We present novel methods, including adaptive federated learning and backpropagation-free optimization for cross-device collaboration. These innovations empower large-scale decentralized learning, reducing system costs while maintaining high performance and privacy. Join this talk to explore how this research is reshaping on-device AI, making LLM fine-tuning practical, efficient, and closer than ever to your fingertips.
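To make the "backpropagation-free" idea concrete, below is a minimal sketch (not the speaker's actual method) of zeroth-order optimization: the gradient is estimated from two forward passes along a shared random perturbation, so no backward pass or stored activations are needed, which is what makes such approaches attractive on memory-limited devices. The toy quadratic loss, hyperparameters, and function names are purely illustrative assumptions.

import numpy as np

def loss(theta):
    # Stand-in for an LLM fine-tuning loss: distance to an arbitrary target.
    target = np.array([1.0, -2.0, 0.5])
    return float(np.sum((theta - target) ** 2))

def zeroth_order_step(theta, rng, lr=0.05, eps=1e-3):
    z = rng.standard_normal(theta.shape)           # shared random direction
    l_plus = loss(theta + eps * z)                 # forward pass 1
    l_minus = loss(theta - eps * z)                # forward pass 2
    grad_est = (l_plus - l_minus) / (2 * eps) * z  # directional gradient estimate
    return theta - lr * grad_est                   # update uses forward passes only

theta = np.zeros(3)
rng = np.random.default_rng(0)
for _ in range(500):
    theta = zeroth_order_step(theta, rng)
print(theta)  # approaches the target without any backpropagation

In practice, schemes of this kind trade extra forward passes and noisier updates for a much smaller memory footprint, which is the relevant trade-off on mobile and edge hardware.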

Bio
Mr. Dongqi Cai is a fourth-year PhD student at Beijing University of Posts and Telecommunications, currently a visiting PhD student in Prof. Nicholas D. Lane's group at the University of Cambridge. His research focuses on efficient on-device machine learning systems. He has authored 12 papers as first or corresponding author, including 7 in top-tier venues such as ACM MobiCom, USENIX ATC, NeurIPS, ACM Computing Surveys, and IEEE Transactions on Big Data. He has received multiple National PhD Scholarships, serves on the artifact evaluation (AE) committees of leading conferences such as ACM MobiCom and ACM MobiSys, and reviews for journals including IEEE TMC, IEEE TSC, and IEEE TKDE.

Seminar series: Cambridge ML Systems Seminar Series
