
Department of Computer Science and Technology

Date: 
Monday, 4 August, 2025 - 11:00 to 12:00
Speaker: 
Salma Kharrat, KAUST
Venue: 
Computer Lab, FW26

In both federated learning (FL) and large language model (LLM) optimization, a central challenge is effective learning under constraints, ranging from data heterogeneity and personalization to limited communication and black-box access. In this talk, I will present three approaches that address these challenges across different settings. FilFL improves generalization in FL by filtering clients based on their joint contribution to global performance. DPFL tackles decentralized personalization by learning asymmetric collaboration graphs under strict resource budgets. Moving beyond FL, I will present ACING, a reinforcement learning method for optimizing instructions for black-box LLMs under strict query budgets, where weights and gradients are inaccessible. While these works tackle distinct problems, they are unified by a common goal: developing efficient learning mechanisms that perform reliably under real-world constraints.
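To make the client-filtering idea concrete, here is a minimal, purely illustrative sketch (not the actual FilFL algorithm; the update vectors, `score` proxy, and greedy rule are all assumptions made up for this example). The idea shown is selecting clients by how their *joint* averaged contribution affects a validation metric, rather than judging each client in isolation:

```python
# Hypothetical sketch of joint-contribution client filtering in FL.
# NOT the FilFL algorithm: `score` is a toy stand-in for held-out
# validation performance, and updates are plain float vectors.

def average(updates):
    """Coordinate-wise mean of a list of update vectors."""
    n = len(updates)
    return [sum(col) / n for col in zip(*updates)]

def score(update, target):
    """Toy validation proxy: negative squared distance to a
    reference direction `target` (assumed for illustration)."""
    return -sum((u - t) ** 2 for u, t in zip(update, target))

def filter_clients(client_updates, target):
    """Greedily keep clients whose inclusion improves the score
    of the averaged (joint) update."""
    selected, best = [], float("-inf")
    for cid, upd in client_updates.items():
        trial = [client_updates[c] for c in selected] + [upd]
        s = score(average(trial), target)
        if s > best:
            selected.append(cid)
            best = s
    return selected

updates = {
    "a": [1.0, 0.0],
    "b": [0.9, 0.1],
    "c": [-1.0, 5.0],  # heterogeneous / outlier client
}
print(filter_clients(updates, target=[1.0, 0.0]))  # prints ['a']
```

Note that client "c" is rejected not for being individually bad, but because adding it to the running average degrades the joint score, which is the distinction the abstract draws.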

Seminar series: 
Cambridge ML Systems Seminar Series
