As part of the application for the programme, you will be asked to select two projects that you'd be interested in working on. Below you can find the proposals for all of the available projects.
Project supervisor: Dr Soumya Banerjee
Essential knowledge, skills and attributes:
Programming experience in Python, knowledge of LLMs and a lot of curiosity!
Project description:
For half a century, artificial intelligence research has attempted to reproduce the human qualities of abstraction and reasoning - creating computer systems that can learn new concepts from a minimal set of examples, in settings where humans find this easy. While specific neural networks are able to solve an impressive range of problems, broad generalization to situations outside their training data has proved elusive. In this work, we will look at several novel approaches for solving the Abstraction & Reasoning Corpus (ARC), a dataset of abstract visual reasoning tasks introduced to test algorithms on broad generalization [1, 2, 3].
It has been suggested that large language models cannot reason. This project will infuse reasoning and priors into large language models and apply them to the Abstraction and Reasoning Corpus (ARC).
We will also apply large language models and large vision models/multimodal models to these problems.
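To make the task concrete: each ARC task is a JSON object with "train" and "test" lists of input/output pairs, where a grid is a list of rows of integers 0-9 (colours). The sketch below uses an invented toy task, not one from the real corpus, and a trivial candidate program, but it shows the basic loop any ARC solver, LLM-based or otherwise, must implement: fit the training pairs exactly, then predict the test output.

```python
import json

# Toy task in the ARC JSON format (the grids here are invented, not from
# the real corpus): "train" and "test" lists of {"input", "output"} grids.
task = json.loads("""
{
  "train": [
    {"input": [[1, 0], [0, 1]], "output": [[0, 1], [1, 0]]},
    {"input": [[2, 2], [0, 0]], "output": [[0, 0], [2, 2]]}
  ],
  "test": [
    {"input": [[3, 0], [0, 3]], "output": [[0, 3], [3, 0]]}
  ]
}
""")

def solve(grid):
    """Toy candidate program: flip the grid vertically."""
    return grid[::-1]

# A candidate must reproduce every training pair exactly before being
# applied to the held-out test input.
assert all(solve(p["input"]) == p["output"] for p in task["train"])
prediction = solve(task["test"][0]["input"])
print(prediction)
```

In the project, the hand-written `solve` would be replaced by programs proposed by an LLM or other search procedure, scored against the training pairs in exactly this way.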
References:
[1] https://github.com/fchollet/ARC
[2] https://blog.jovian.ai/finishing-2nd-in-kaggles-abstraction-and-reasoning-challenge-24e59c07b50a
[3] https://github.com/alejandrodemiquel/ARC_Kaggle
Project supervisor: Professor Alan Blackwell
Co-supervisor: Akeelah Bertram (Cavendish Arts and Science Fellow)
Essential knowledge, skills and attributes:
Some experience of developing web or mobile applications is essential. An interest in computing in the arts and humanities, ideally with experience of creative imagery, music or other artistic practice, will be helpful.
Project description:
This experimental digital arts project aims to create an immersive experience that explores the human relationship with water, working across different cultures, and merging ancient and modern perspectives and technologies. It will apply design concepts from generative AI and layer-based visual programming languages. The specific tasks will involve the creation of an interactive app that uses environmental data transmitted from a global network of sensors. It could also extend to development of innovative visualizations and soundscapes that will be incorporated into exhibitions or performances. The experimental work will involve interacting directly with arts researchers in Cambridge, while using and extending advanced technologies. Broad interests, including philosophy and experimental arts, will be welcome. This project would be valuable experience for students planning to do further research or work in digital arts, digital humanities, or other fields of contemporary art, performance and design.
Project supervisors: Dr Hongyu Zhou & Ryan Daniels
Essential knowledge, skills and attributes:
You should have a good understanding of data analysis with Python.
- Essential: Python, Pandas, Numpy
- Nice to have: Machine learning knowledge and experience with scikit-learn
We welcome applicants from all academic disciplines.
Project description:
DeepMind has recently identified five main areas where AI can accelerate scientific discovery: how we digest and communicate knowledge; generating and annotating massive datasets; simulating complex experiments; modelling complex physical systems; and narrowing down large search spaces.
At the Accelerate Programme for Scientific Discovery, we engage with hundreds of people across the University of Cambridge – from historians to theoretical physicists. This project aims to analyse consultation session data to map the landscape of AI adoption and implementation across the university. We will analyse patterns in AI usage across different departments, and try to identify emerging trends and potential areas for cross-disciplinary collaboration. Through this work, we aim to generate insights that will inform future AI training and support initiatives across the university. The successful candidate will gain hands-on experience working with real-world AI implementation data, exposure to diverse AI applications across academic disciplines, and the opportunity to contribute to shaping AI strategy at a leading university.
Project supervisors: Richard Diehl-Martinez, Prof Paula Buttery & Ryan Daniels
Essential knowledge, skills and attributes:
You should be familiar with Python and PyTorch and have an interest in large language models. Any experience with web development frameworks and serving machine learning endpoints is a bonus (e.g. Next.js, Svelte, Docker, FastAPI, Redis, etc.).
Applicants from diverse academic backgrounds are encouraged to apply.
Project description:
Pico is a lightweight research framework that demystifies how language models learn. Built with simplicity in mind, it provides an efficient way to train and study models of different sizes. By understanding how models learn, we can inform and improve the way we build and train them. Pico provides a controlled environment where the only variable is model size. This allows researchers to isolate the effects of model scale on learning dynamics, offering insights into how different architectures perform under identical conditions.
In this project, we will expand Pico to include methods to quantify and visualize learning dynamics. The aim is to design and implement a web-based framework that allows machine learning researchers to monitor the ‘health’ of model training in real-time. The framework will provide interpretable and interactive visualizations, enabling users to track and understand learning dynamics easily and intuitively.
Project supervisor: Dr Sam Nallaperuma-Herzberg
Co-supervisors: Professor Pietro Lio and Professor Neil Lawrence
Essential knowledge, skills and attributes:
Experience programming in Python or a similar language. Knowledge of AI will be an advantage but is not essential.
Project description:
You will have the opportunity to conduct a literature survey aimed at choosing a suitable database, fine-tuning techniques and the most effective foundation models for music generation. Building on these existing foundation models, effective text conditioning, real-time inference and music style transfer techniques will be investigated.
You will have the opportunity to extend this research further to develop foundation models which provide music therapeutic support for depression, stress, sleep and neurodiversity.
Possible extensions, depending on the number of students and the time available:
- Develop a mobile phone application where users can validate the developed music therapy generator.
- Conduct a small-scale human participant study with the developed mobile application to validate the effectiveness of the developed therapy.
Successful applicants will get hands-on training in using AI libraries and training models during the project. Moreover, you will have the opportunity to develop team-working skills, network and receive valuable feedback from relevant experts in the Accelerate Programme and the Department of Computer Science and Technology, as well as the team and collaborators of the BrainTwin project, including the Department of Engineering, Cambridge Neuroscience, the Department of Psychology, the Faculty of Music, the Alan Turing Institute, Royal Papworth Hospital and University College London Hospitals.
Project supervisor: Dr Sam Nallaperuma-Herzberg
Co-supervisors: Professor Pietro Lio and Professor Neil Lawrence
Essential knowledge, skills and attributes:
Experience programming in Python or a similar language. Knowledge of AI will be an advantage but is not essential.
Project description:
This project will investigate the efficacy of pre-trained language models such as Llama, BART and ChatGPT in producing Cognitive Behavioural Therapy, considering the aspects of cognitive restructuring, psychoeducation, and relaxation and mindfulness techniques. Approaches to curating Cognitive Behavioural Therapy datasets using existing content as well as domain expert input will be considered. Moreover, techniques for adapting the pre-trained language models, such as low-rank adaptation (LoRA) and retrieval-augmented generation (RAG), will be investigated.
You will have the opportunity to extend this research further to develop LMs to provide therapeutic support for stress, sleep and neurodiversity.
Successful applicants will get hands-on training in using AI libraries and training models during the project. Moreover, you will have the opportunity to develop team-working skills, network with others and receive valuable feedback from relevant experts in the Accelerate Programme and the Department of Computer Science and Technology, as well as the team and collaborators of the BrainTwin project, including the Department of Engineering, Cambridge Neuroscience, the Department of Psychology, the Faculty of Music, the Alan Turing Institute, Royal Papworth Hospital and University College London Hospitals.
Project supervisor: Zeyu (Jamie) Cao
Co-supervisor: Professor Nic Lane
Essential knowledge, skills and attributes:
- Low-level programming experience (C/C++)
- Experience with LLMs
- Experience with edge development
- An understanding of Machine Learning is an advantage
Project description:
As Large Language Models (LLMs) transform the world, it is essential that the capability of Neural Processing Units (NPUs) is thoroughly understood, and that we explore the design landscape for complex optimisation objectives such as energy efficiency, compute throughput and latency.
It is important to understand the performance trade-offs of a target platform and how the inference model can best accommodate those trade-offs, achieving a balance between NPU capability and model capability. Understanding how these trade-offs vary could enable further progress in co-designing models that are more secure or have lower latency.
This project will first profile the various constraints and capabilities of NPUs and then use this performance framework to reason about various inference techniques. This will provide a foundation for further analysis in design-space optimisation for LLM architecture co-design for NPU and edge inference.
Project supervisor: Dr Roly Perera
Co-supervisor: Dr Cristina David (University of Bristol)
Essential knowledge, skills and attributes:
A strong background in at least one of software engineering, maths and natural sciences is required.
Project description:
This is an exciting opportunity to apply AI in the context of a new programming language designed to make climate science more open, intelligible and accessible.
Charts and other visual summaries created by journalists and scientists using data from observations and simulations are how we understand our changing world. But interpreting these complex artifacts is a significant challenge, even for experts with access to the underlying data and source code. Fluid is a new kind of “transparent” programming language, being developed at the Institute of Computing for Climate Science in Cambridge (ICCS), that can be used to create charts and figures which are linked to data. A curious reader can discover what visual elements actually represent by interacting with the chart in various ways. For more information see our demos at https://f.luid.org and our poster at https://dynamicaspects.org/papers/fluid-poster.pdf.
There are two novel AI applications driving the next iteration of Fluid. Both are somewhat unusual in that they make use of (black-box) LLMs in the service of transparency:
A) Extending Fluid with computational explanations: information about the specific steps that were involved in computing a particular feature of the output (e.g. the whiskers decorating a bar in a bar chart). This is a potentially powerful transparency feature, allowing readers (perhaps during peer review) to discover otherwise hidden or obscured facts about the data underpinning a visualisation. One internship project will involve turning these computational explanations into more user-friendly natural language explanations that would be useful for lay readers as well as expert readers. The novel contribution that Fluid can make to this problem is to provide an authoritative ground truth for the generation of the natural language, offering the prospect of a “trusted” or reliable form of open, self-explanatory artifact. (See “AI reading assistant” in the poster.)
B) Extending Fluid with transparent text: natural language (such as the expository text in a climate report for policy makers) which is underwritten by a semi-formal computational interpretation, especially quantitative phrases or other fragments of text expressing data-driven claims. For example, the statement that under a particular emissions scenario, global warming is _extremely likely_ to exceed 2°C in the 21st century can be underwritten by a Fluid program that assigns a specific interpretation to this text in terms of the distributions of the underlying data used (by the report author) to reach that conclusion. Another internship project will be to develop AI tooling which replaces fragments of text by expressions that compute that text from data. (See “AI authoring assistant” in the poster.)
This will be an opportunity for a strong, scientifically-minded student to learn more about the application of AI to open science and science communication. The project will be run in partnership with the ICCS, which can provide careers advice and networking opportunities for students interested in working at this intersection. Successful applicants can expect to gain experience in AI tooling for software engineering, the mapping between formal languages and natural language, and data visualisation. They will work in close (daily) collaboration with the PI and will be encouraged to present their work to researchers and data scientists at the ICCS, the Department of Computer Science and Technology, and our collaborators at The Alan Turing Institute and University of Bristol.
Project supervisor: Professor Hatice Gunes
Essential knowledge, skills and attributes:
Interdisciplinary candidates, particularly those with a background in psychology/social science, are encouraged to apply.
Project description:
Given the increasing prevalence of robots in our daily lives, and as ML bias and explainability become an increasing source of concern, advancing explainable and fair human-robot interaction (HRI) is an increasingly important undertaking [1, 2, 3]. Concurrently, existing work has highlighted the intricate relationship between explanations and fairness, and that human perception of ML explainability and fairness has yet to be thoroughly and systematically explored [4, 5, 6, 7]. The project thus focuses on how people recover from their mistakes by explaining themselves, how explanations affect a user's perception of fairness, and how these strategies can be leveraged by social robots in different HRI settings. This user-based, human-centred data collection study will focus on the following:
- Collecting data on how people recover from their mistakes by explaining themselves and how we can adapt this to social robots or HRI settings,
- Evaluating how the different explanations correspond to user-perceived fairness,
- Investigating how explanations can be leveraged to improve overall fairness and vice versa in HRI settings.
This project thus aims to understand how to advance explainable and fair ML via the use of human-centred approaches for HRI settings and is expected to contribute to the theoretical aspects of human-robot interaction [8].
References:
[1] Dogan, F.I., Ozyurt, U., Cinar, G. and Gunes, H., 2024. GRACE: Generating Socially Appropriate Robot Actions Leveraging LLMs and Human Explanations. arXiv preprint arXiv:2409.16879.
[2] Cheong, J., Spitale, M. and Gunes, H. Small but Fair! Fairness for Multimodal Human-Human and Robot-Human Mental Wellbeing Coaching.
[3] Londoño, L., Hurtado, J.V., Hertz, N., Kellmeyer, P., Voeneky, S. and Valada, A., 2024. Fairness and Bias in Robot Learning. Proceedings of the IEEE.
[4] Shulner-Tal, A., Kuflik, T., Kliger, D., & Mancini, A. (2024). Who Made That Decision and Why? Users’ Perceptions of Human Versus AI Decision-Making and the Power of Explainable-AI. International Journal of Human-Computer Interaction, 1-18.
[5] Haghighi, F., 2024. Evaluating End-Users’ Perspectives Toward Explainability in Artificial Intelligence.
[6] Hajigholam Saryazdi, A., 2024, May. Algorithm Bias and Perceived Fairness: A Comprehensive Scoping Review. In Proceedings of the 2024 Computers and People Research Conference (pp. 1-9).
[7] Narayanan, D., Nagpal, M., McGuire, J., Schweitzer, S., & De Cremer, D. (2024). Fairness perceptions of artificial intelligence: A review and path forward. International Journal of Human-Computer Interaction, 40(1), 4-23.
[8] Chen, H., Alghowinem, S., Breazeal, C., & Park, H. W. (2024, March). Integrating Flow Theory and Adaptive Robot Roles: A Conceptual Model of Dynamic Robot Role Adaptation for the Enhanced Flow Experience in Long-term Multi-person Human-Robot Interactions. In Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction (pp. 116-126).
Project supervisor: Dr Eiko Yoneki
Co-supervisor: Zak Singh
Essential knowledge, skills and attributes:
Strong interest in tensor program optimisation. Some knowledge of or interest in Bayesian Optimisation and Python, along with some knowledge of C++.
Project description:
Tensor codes are run on massively parallel hardware. When generating tensor code (also called auto-scheduling), the TVM compiler [1] needs to search through many parameters. A state-of-the-art auto-scheduler is Ansor [2], which applies rule-based mutation to generate tensor code templates and then fine-tunes those templates via Evolutionary Search. We think Bayesian Optimisation (BayesOpt) [3] could search the tensor code templates more efficiently than Evolutionary Search.
First, TVM will be set up for benchmarking with ResNet and BERT using CPU and GPU (possibly a few different types of GPU). Next, the same benchmarking should be carried out with NVIDIA's compiler Cutlass [8]. Afterwards, the use of BayesOpt for high-performance tensor code generation will be explored, and black-box algorithms for tensor code generation will be benchmarked. The main interface for tensor code generation in TVM will be MetaScheduler [6], which provides a fairly simple Python interface for various search methodologies [7]. We also have a particular interest in tensor code generation for tensor cores, which recent generations of GPUs (since the Turing micro-architecture) provide as domain-specific units to massively accelerate tensor programs. The project can take advantage of a former ACS student's work [9][10], with a focus on performance improvement on GPU, multi-objective BO, and scalability.
References:
[1] TVM: An Automated End-to-End Optimizing Compiler for Deep Learning
[2] Ansor: Generating High-Performance Tensor Programs for Deep Learning
[3] A Tutorial on Bayesian Optimization
[4] HEBO Pushing The Limits of Sample-Efficient Hyperparameter Optimisation
[5] Are Random Decompositions all we need in High Dimensional Bayesian Optimisation?
[6] Tensor Program Optimization with Probabilistic Programs
[7] https://github.com/apache/tvm/blob/4267fbf6a173cd742acb293fab4f77693dc4b887/python/tvm/meta_schedule/search_strategy/search_strategy.py#L238
[8] NVIDIA Compiler
[9] https://github.com/hgl71964/unity-tvm
[10] Discovering Performant Tensor Programs with Bayesian Optimization
Project supervisor: Dr Eiko Yoneki
Co-supervisor: Taiyi Wang
Essential knowledge, skills and attributes:
Experience with Python and deep reinforcement learning.
Project description:
The deep learning recommender model (DLRM) is one of the most important applications of deep learning. A key challenge for DLRMs is sharding the embedding tables across multiple devices, which involves both column-wise and row-wise sharding. Neuroshard [1] proposes using a DNN as a cost model to guide the search for a sharding strategy, and then uses a combination of beam search and greedy search to find it. We would like to see whether deep reinforcement learning (RL) can find a better sharding strategy. Your solution should be compared and benchmarked against [1][2].
References:
[1] Pre-train and Search: Efficient Embedding Table Sharding with Pre-trained Neural Cost Models
[2] AutoShard: Automated Embedding Table Sharding for Recommender Systems
Project supervisor: Dr Eiko Yoneki
Co-supervisor: Youhe Jiang
Essential knowledge, skills and attributes:
Proficiency in Python/CUDA/C++ programming and an understanding of Large Language Model (LLM) inference and serving.
Project description:
Serving multiple LLMs (such as GPT, Llama, OPT, Falcon, etc.) efficiently in heterogeneous clusters presents unique challenges due to varying computational power, communication bandwidth, memory bandwidth and memory limits across different types of GPUs [1]. The project aims to extend the capabilities of multi-model serving [2] to heterogeneous clusters effectively. The initial phase involves setting up a benchmarking suite to evaluate different model-serving frameworks like vLLM [3] and DistServe [4] on various cluster configurations. Subsequently, the project will focus on designing a custom LLM serving framework that leverages dynamic resource allocation to optimize for throughput and latency across heterogeneous environments. This involves developing algorithms for intelligent job scheduling and resource management that consider the unique characteristics of each cluster node. The goal is to enhance the efficiency and scalability of serving multiple models in diverse computing environments, which is critical for applications in areas like autonomous driving and real-time data analytics. There is an ongoing project in our group on the above topic, and the intern can take advantage of the platform built already, focusing on benchmarking tasks and an extension of the scheduling algorithms.
References:
[1] Jiang, Youhe, et al. "HexGen: Generative Inference of Large Language Model over Heterogeneous Environment." Forty-first International Conference on Machine Learning.
[2] Duan, Jiangfei, et al. "MuxServe: Flexible Spatial-Temporal Multiplexing for Multiple LLM Serving." Forty-first International Conference on Machine Learning.
[3] Kwon, Woosuk, et al. "Efficient memory management for large language model serving with paged attention." Proceedings of the 29th Symposium on Operating Systems Principles. 2023.
[4] Zhong, Yinmin, et al. "{DistServe}: Disaggregating Prefill and Decoding for Goodput-optimized Large Language Model Serving." 18th USENIX Symposium on Operating Systems Design and Implementation (OSDI 24). 2024.
Project supervisor: Dr Eiko Yoneki
Co-supervisor: Taiyi Wang
Essential knowledge, skills and attributes:
Knowledge of Reinforcement Learning, Gaussian Processes, Bayesian Optimisation and Parallel Computing.
Project description:
Hyperparameter tuning of large-scale deep learning models, such as ResNet-18, VGG-19, ResNet-50, and ResNet-152, is a critical but computationally intensive task due to the large search space. This project proposes a novel hyperparameter tuning algorithm that integrates Monte Carlo Tree Search (MCTS) and Bayesian Optimisation (BO). MCTS, known for its efficiency in large and high-dimensional spaces, will be employed to partition the hyperparameter space into smaller subspaces, which inherently enables parallelism. MCTS's probabilistic and heuristic-driven approach makes it lightweight and well-suited for black-box optimisation [1, 3]. On the other hand, BO is particularly efficient in smaller parameter spaces, making it an ideal choice for searching within the partitions created by MCTS. By combining MCTS's efficiency in high-dimensional spaces with BO's efficiency in lower-dimensional spaces, this approach aims to significantly enhance the speed and performance of hyperparameter tuning. Suggested benchmarks include the CIFAR-10 and CIFAR-100 datasets, with comparison against existing methods in terms of computational efficiency, accuracy improvements, and resource allocation strategies.
References:
[1] Wang L, et al. Learning search space partition for black-box optimization using Monte Carlo tree search.
[2] Paul et al.: Fast efficient hyperparameter tuning for policy gradients. NIPS 2019.
[3] Facebook Research. LaMCTS: Large-scale Parallel Architecture Search Using Monte Carlo Tree Search.
[4] X. Dong and Y. Yang. Nas-bench-201: Extending the scope of reproducible neural architecture search. 2020.
[5] Zhang, Baohe, et al.: On the Importance of Hyperparameter Optimization for Model-based Reinforcement Learning