
Department of Computer Science and Technology

Date: Friday, 7 November, 2025 - 11:00 to 12:00
Speaker: Finn Anderson
Venue: Computer Lab, SS03

Distributed training faces fundamental challenges from client heterogeneity in compute, memory, and network conditions. Existing approaches use staleness-dependent decay, per-client adjustments, or distance-weighted averaging, but often lack formal convergence guarantees. I present DISCO, a control-theoretic framework that recasts federated learning as a linear time-delay system. Using Lyapunov stability analysis, DISCO derives lightweight online adjustments with verifiable convergence bounds. Deployed on a Raspberry Pi 4 cluster, DISCO achieves 3.0–4.0× faster time-to-accuracy across text classification benchmarks. This work demonstrates how dynamical-systems theory enables provably efficient federated learning on commodity hardware.
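For context, the sketch below shows the kind of staleness-dependent decay the abstract cites as a common baseline: stale client updates are exponentially down-weighted before server aggregation. This is a minimal illustration only; the function name and the fixed decay rate `lam` are hypothetical, and DISCO's actual update rule (derived from Lyapunov analysis of the time-delay system) is not described in the abstract.

```python
import numpy as np

def aggregate_with_staleness_decay(global_model, client_updates, staleness, lam=0.5):
    """Staleness-weighted server aggregation (illustrative sketch only).

    global_model   : np.ndarray, current global parameters
    client_updates : list of np.ndarray, parameter deltas from clients
    staleness      : list of int, server rounds elapsed since each client
                     pulled the model it trained on
    lam            : hypothetical fixed decay rate; DISCO instead derives its
                     online adjustments from Lyapunov stability analysis
    """
    # Exponentially down-weight stale updates, then normalise the weights.
    weights = np.array([np.exp(-lam * s) for s in staleness])
    weights /= weights.sum()

    # Apply the weighted average of client deltas to the global model.
    delta = sum(w * u for w, u in zip(weights, client_updates))
    return global_model + delta

# Example: three clients, one of which is two rounds stale.
model = np.zeros(4)
updates = [np.ones(4), 0.5 * np.ones(4), 2.0 * np.ones(4)]
print(aggregate_with_staleness_decay(model, updates, staleness=[0, 0, 2]))
```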

Bio: Finn is an Engineer at Cedana AI with an MSc in Computer Science from UCL. His background spans machine learning and financial mathematics, with a focus on distributed training systems. His research explores homomorphic encryption, hardware acceleration, and distributed algorithms for efficient training at scale. Finn is particularly passionate about hardware-aware optimization and designing training workloads for commodity devices.

Seminar series: Cambridge ML Systems Seminar Series
