Department of Computer Science and Technology

NLIP 2024 Social: Meet New PhD Students

Friday, 11 October, 2024 - 12:00 to 13:00

For the first seminar of the year, we welcome new members of the Natural Language & Information Processing group. Speaker Biographies: Suchir Salhan is a first-year PhD student supervised by Professor Paula Buttery, researching efficient and scalable machine learning techniques for small language models. Paul...


Title to be confirmed

Friday, 26 April, 2024 - 12:00 to 13:00

Abstract not available


The tradeoff governing efficient language model architectures

Friday, 14 June, 2024 - 16:00 to 17:00

Recent work has proposed alternative language model architectures (e.g. RWKV, Mamba, Hyena) that are dramatically faster than Attention (e.g. 25x higher throughput). However, it’s unclear how switching to these new architectures might affect the behavior of language models when scaled up. In this talk, we’ll discuss our...
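
To make the throughput contrast concrete (this sketch is illustrative and not taken from the talk), the key difference is how the sequence-mixing step scales with context length n: self-attention forms an n-by-n score matrix, while recurrent-style layers such as those in RWKV or Mamba update a fixed-size state once per token. A minimal NumPy sketch with toy shapes, making no claim to match any real architecture:

```python
import numpy as np

def attention_mix(x, Wq, Wk, Wv):
    """Quadratic-time mixing: every token attends to every other token."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = (q @ k.T) / np.sqrt(k.shape[-1])           # (n, n) pairwise scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # softmax over each row
    return weights @ v                                  # cost grows as O(n^2)

def recurrent_mix(x, A, B, C):
    """Linear-time mixing: a fixed-size state is updated once per token."""
    n = x.shape[0]
    state = np.zeros(A.shape[0])
    out = np.zeros((n, C.shape[0]))
    for t in range(n):                                  # one update per token, O(n)
        state = A @ state + B @ x[t]
        out[t] = C @ state
    return out

# Toy usage: same input, very different scaling behaviour in sequence length.
rng = np.random.default_rng(0)
n, d, s = 8, 4, 4
x = rng.normal(size=(n, d))
print(attention_mix(x, *(rng.normal(size=(d, d)) for _ in range(3))).shape)  # (8, 4)
print(recurrent_mix(x, rng.normal(size=(s, s)), rng.normal(size=(s, d)),
                    rng.normal(size=(s, s))).shape)                          # (8, 4)
```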


Evaluating Large Language Models as Model Systems for Language

Friday, 7 June, 2024 - 14:00 to 15:00

In this talk, we investigate the potential of Large Language Models to serve as model systems for language. Model systems for language should first and foremost perform the relevant function, i.e., use language in the right way. In the first part of the talk we investigate this claim in two ways. First, we critically look...


Open-Endedness and General Intelligence

Friday, 24 May, 2024 - 12:00 to 13:00

I will talk about our research towards developing increasingly capable and general AI. Central to this research direction is Open-Endedness: the attempt to create an AI that can endlessly improve and expand its capabilities. In particular, this talk will focus on three areas: training autonomous and robust agents that can...


The intersection of Interpretability and Fairness

Friday, 17 May, 2024 - 12:00 to 13:00

A survey of interpretability methods for neural networks: from gender bias mitigation to interpreting BERT embeddings in a psycholinguistic manner. Bio: Giuseppe Attanasio is a postdoctoral researcher affiliated with the Milan Natural Language Processing (MilaNLP) Lab at Bocconi University. His research primarily focuses...


Title to be confirmed

Friday, 31 May, 2024 - 12:00 to 13:00

Abstract not available


Title to be confirmed

Friday, 17 May, 2024 - 12:00 to 13:00

Abstract not available


Title to be confirmed

Friday, 3 May, 2024 - 12:00 to 13:00

Abstract not available


Automated Fact-Checking of Climate Change Claims with Large Language Models

Friday, 10 May, 2024 - 13:00 to 14:00

This talk introduces Climinator, a novel AI-based tool designed to automate the fact-checking of climate change claims. Utilizing an array of Large Language Models (LLMs) informed by authoritative sources like the IPCC reports and peer-reviewed scientific literature, Climinator employs an innovative Mediator-Advocate...
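
The mediator-advocate idea can be sketched in a few lines of Python. This is only an illustration of the general pattern, not Climinator's actual implementation; `query_llm`, the role prompts, and the verdict labels are all hypothetical placeholders.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    label: str       # e.g. "supported", "refuted", "not enough evidence"
    rationale: str

def query_llm(prompt: str) -> str:
    """Hypothetical call to an LLM grounded in a given evidence corpus."""
    raise NotImplementedError

def fact_check(claim: str, advocate_sources: list[str], rounds: int = 2) -> Verdict:
    # Each advocate assesses the claim from its own evidence base
    # (e.g. IPCC reports, peer-reviewed literature).
    arguments = {src: query_llm(f"Using only {src}, assess the claim: {claim}")
                 for src in advocate_sources}
    # Advocates see one another's arguments and refine their positions.
    for _ in range(rounds):
        summary = "\n".join(f"[{s}] {a}" for s, a in arguments.items())
        arguments = {src: query_llm(f"Given the arguments below, refine your assessment "
                                    f"of '{claim}' using only {src}:\n{summary}")
                     for src in advocate_sources}
    # A mediator weighs the final arguments and issues a verdict with a rationale.
    summary = "\n".join(f"[{s}] {a}" for s, a in arguments.items())
    decision = query_llm("As a neutral mediator, weigh the arguments below and answer "
                         "'supported', 'refuted', or 'not enough evidence' for: "
                         f"{claim}\n{summary}")
    return Verdict(label=decision.splitlines()[0].strip(), rationale=decision)
```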