Department of Computer Science and Technology

  • Associate Professor in Machine Learning

I'm interested in the principles of machine learning, with a particular focus on modern deep learning methods. My research falls into the following themes:

  • Optimization, Generalization and Transferability in Deep Learning: Why do deep networks generalize? What is the role of stochastic gradient descent? What is the role of the (over)parametrization of neural networks? Can we design optimization algorithms that find better minima? Why does transfer learning work so well in neural networks?
  • Mathematical Models of Emergent Behaviours: It is clear that notions of statistical generalisation are insufficient to describe the range of "intelligent behaviours" LLMs display: in-context learning, reasoning, extrapolation of rules to unseen data. We need new mathematical models of these behaviours so we can reason about why we see them, and what aspects of training give rise to them.
  • Unsupervised Representation Learning: What are the hallmarks of good neural representations of data, and how do we discover these from data without labels? How can we formalize the goals and principles of unsupervised representation learning, of self-supervised learning, of transfer learning? Can we understand why self-supervised learning works so well in vision and NLP?
  • Probabilistic Foundations: How should we represent uncertainty about models in deep learning? Is Bayesian inference a good principle for deep learning? Can we develop alternative ways of quantifying uncertainty that line up better with our goals? Can we develop practical inference algorithms in prediction-oriented, loss-calibrated situations?
  • Causal Inference and Identifiability: The world of causal inference is full of non-identifiability results: certain causal relationships cannot be inferred from observational data alone. However, humans as well as machine learning systems can often recover correct causal information in spite of this limitation. We study situations where causal relationships are identifiable, and the role inductive biases play in identifying causal relationships where data provides insufficient constraints.

Biography

I finished my PhD in Machine Learning at the Cambridge University Engineering Department in 2013, focussing on Bayesian inference, nonparametric and kernel methods. I then worked in the London technology sector, in various tech startups and briefly in venture capital. I served as Principal Research Scientist at deep learning startup Magic Pony Technology, where we focussed on applying deep learning to the problem of lossy image and video compression. Following the acquisition of Magic Pony by Twitter in 2016, I served as Senior Machine Learning Researcher at Twitter, where I worked on a range of ML-related projects including computer vision and recommender systems, and helped set up Twitter's ML Ethics, Transparency and Accountability (META) team. I joined the Department of Computer Science and Technology in 2020.

Outside my university role, I run AI retreats for high-school students in Hungary, and have set up a scholarship fund for talented students from Ukraine.

Teaching

Publications

For a list of publications, please refer to my Google Scholar page.

Contact Details

Room: FE03
Office phone: (01223) 7-63626
Email: fh277@cam.ac.uk