
Department of Computer Science and Technology

Date: Thursday, 4 June 2026, 17:00 to 17:45
Speaker: Valeria Ruscio
Venue: Lecture Theatre 2, Computer Laboratory, William Gates Building

Positional encodings are essential for transformer-based language models to understand sequence order, yet their influence extends far beyond simple position tracking. This talk explores the landscape of positional encoding methods in LLMs and reveals surprising insights about how these architectural choices shape model behavior.

We begin with the fundamental challenge: why attention mechanisms require explicit positional information, given that self-attention is permutation-invariant and cannot distinguish token order on its own. We then survey the evolution of encoding strategies, from sinusoidal approaches to modern techniques like RoPE, examining their architectural implications and trade-offs.
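As a rough illustration of the two families mentioned above (this sketch is not part of the talk materials), the Python snippet below contrasts absolute sinusoidal encodings, which are added to token embeddings, with RoPE, which rotates query/key vectors by a position-dependent angle so that attention scores depend on relative offsets. The base constant 10000 and the dimension names are the conventional choices, used here as assumptions.

```python
# Minimal sketch of two positional encoding families; assumes NumPy only.
import numpy as np

def sinusoidal_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Absolute sinusoidal encodings (added to token embeddings)."""
    positions = np.arange(seq_len)[:, None]                 # (seq_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]                # (1, d_model/2)
    angles = positions / np.power(10000.0, dims / d_model)  # (seq_len, d_model/2)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)  # even dimensions get sine
    pe[:, 1::2] = np.cos(angles)  # odd dimensions get cosine
    return pe

def rope_rotate(x: np.ndarray, position: int) -> np.ndarray:
    """RoPE: rotate consecutive (even, odd) pairs of a query or key vector
    by an angle proportional to its position, so dot products between
    rotated queries and keys depend only on their relative distance."""
    d = x.shape[-1]
    dims = np.arange(0, d, 2)
    theta = position / np.power(10000.0, dims / d)  # per-pair rotation angles
    cos, sin = np.cos(theta), np.sin(theta)
    x_even, x_odd = x[0::2], x[1::2]
    out = np.empty_like(x)
    out[0::2] = x_even * cos - x_odd * sin
    out[1::2] = x_even * sin + x_odd * cos
    return out
```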

The talk then examines how each encoding strategy shapes model architectures and learned representations, tracing how positional information propagates through transformer layers and where the specific limitations of each approach emerge.

"Watch it remotely":https://meet.google.com/vch-pxrb-htz

Seminar series: Foundation AI
