Department of Computer Science and Technology

Title to be confirmed

Friday, 20 June, 2025 - 12:00 to 13:00

Abstract not available


Title to be confirmed

Friday, 13 June, 2025 - 12:00 to 13:00

Abstract not available


Measuring Political Bias in Large Language Models

Friday, 16 May, 2025 - 12:00 to 13:00

Large language models (LLMs) are helping millions of users to learn and write about a diversity of issues. In doing so, LLMs may expose users to new ideas and perspectives, or reinforce existing knowledge and user opinions. This creates concerns about political bias in LLMs, and how these biases might influence LLM users...


When is Multilinguality a Curse? Language Modeling for 350 Languages

Friday, 6 June, 2025 - 15:00 to 16:00

NOTE THE UNUSUAL TIME FOR THIS SEMINAR. Language models work well for a small number of languages. For other languages, the best existing language model is likely multilingual, with the vast majority of the training data still coming from English and a few "priority" languages. We show that in many cases...


Research Progress in Mechanistic Interpretability

Friday, 9 May, 2025 - 12:00 to 13:00

The goal of Mechanistic Interpretability research is to explain how neural networks compute outputs in terms of their internal components. But how much progress has been made towards this goal? While a large amount of Mechanistic Interpretability research has been produced by academia, frontier AI companies such as Google...


Title to be confirmed

Friday, 23 May, 2025 - 12:00 to 13:00

Abstract not available


Asymmetry in Supposedly Equivalent Facts: Pre-training Bias in Large Language Models

Friday, 2 May, 2025 - 12:00 to 13:00

Understanding and mitigating hallucinations in Large Language Models (LLMs) is crucial for ensuring reliable content generation. While previous research has primarily focused on “when” LLMs hallucinate, our work explains “why” and directly links model behaviour to the pre-training data that forms their prior knowledge...


LLMs as supersloppers and other metaphors

Friday, 7 February, 2025 - 12:00 to 13:00

Abstract: The interdisciplinary pilot project ‘Exploring novel figurative language to conceptualise Large Language Models’ is funded by Cambridge Language Sciences. This talk mainly concerns ‘slop’, by which we mean text delivered to a reader which is of little or no value to them (or is even harmful) or is so verbose or...


Analysing Memorisation in Classification and Translation through Localisation and Cartography

Friday, 24 January, 2025 - 12:00 to 13:00

Memorisation is a natural part of learning from real-world data: neural models pick up on atypical input-output combinations and store those training examples in their parameter space. That this happens is well-known, but which examples require memorisation and where in the millions (or billions) of parameters memorisation...


Title to be confirmed

Friday, 30 May, 2025 - 12:00 to 13:00

Abstract not available