Department of Computer Science and Technology

Title to be confirmed

Tuesday, 7 May, 2024 - 14:00 to 15:00

Abstract not available


Title to be confirmed

Tuesday, 4 June, 2024 - 14:00 to 15:00

Abstract not available


A biography of Tor - a cultural and technological history of power, privacy, and global politics at the internet's core

Friday, 26 April, 2024 - 17:00 to 18:00

In the seminar, Dr Ben Collier will introduce the new book, Tor: From the Dark Web to the Future of Privacy (MIT Press, 2024).

Speakers:
* Chair: Prof Alice Hutchings
* Speaker 1: Dr Ben Collier
* Speaker 2: Professor Steven Murdoch

5:00pm, 26th April 2024, LT2 - William Gates Building, 15 JJ Thomson Avenue, Cambridge, CB3...


Title to be confirmed

Tuesday, 30 April, 2024 - 14:00 to 15:00

Abstract not available


Legal Concepts in Cybersecurity and Privacy for Digital & Smart Environments

Tuesday, 2 April, 2024 - 14:00 to 15:00

This talk will cover trends in the current cybersecurity and privacy legal environment that apply to digital, marketing, & smart settings. As technologists and organizations work on digital & smart innovations using emerging technologies that are poised to change the way we interact with goods and each other, legal...


How to Catch an AI Liar: Lie Detection in Black-Box LLMs by Asking Unrelated Questions

Tuesday, 5 March, 2024 - 14:00 to 15:00

Large language models (LLMs) can "lie", which we define as outputting false statements despite "knowing" the truth in a demonstrable sense. LLMs might "lie", for example, when instructed to output misinformation. Here, we develop a simple lie detector that requires neither access to the LLM's activations (black-box) nor...
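Concretely, the black-box approach suggested by the title can be pictured as follows: after a suspected lie, ask the model a fixed set of unrelated yes/no follow-up questions and classify the resulting answer pattern. The sketch below is a minimal illustration of that pipeline; the probe questions, the `query_model` placeholder, and the logistic-regression choice are assumptions for exposition, not the authors' exact setup.

```python
# Minimal sketch of a black-box lie-detection pipeline built on unrelated
# follow-up questions. query_model and the probes are illustrative placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

PROBE_QUESTIONS = [
    "Is the sky blue on a clear day? Answer yes or no.",
    "Can a fish ride a bicycle? Answer yes or no.",
    "Is 7 larger than 3? Answer yes or no.",
]

def query_model(conversation: list[str]) -> str:
    """Placeholder for a call to a black-box LLM (e.g. a chat API)."""
    raise NotImplementedError

def probe_features(conversation: list[str]) -> np.ndarray:
    """Encode the model's yes/no answers to the unrelated probes as a binary vector."""
    answers = []
    for question in PROBE_QUESTIONS:
        reply = query_model(conversation + [question]).strip().lower()
        answers.append(1.0 if reply.startswith("yes") else 0.0)
    return np.array(answers)

def train_detector(feature_matrix: np.ndarray, labels: np.ndarray) -> LogisticRegression:
    """Fit a simple classifier on answer patterns from known-truthful (0) and instructed-lie (1) conversations."""
    return LogisticRegression().fit(feature_matrix, labels)
```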


Mysticeti: Low-Latency DAG Consensus with Fast Commit Path

Tuesday, 12 March, 2024 - 14:00 to 15:00

This talk introduces Mysticeti, a Byzantine consensus protocol with low latency and high resource efficiency. It leverages a DAG based on Threshold Clocks and incorporates innovations in pipelining and multiple leaders to reduce latency both in the steady state and under crash failures. Mysticeti is the first Byzantine protocol...
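As a rough illustration of the threshold-clock idea mentioned above, the sketch below advances a validator's logical round only once blocks from a quorum (2f+1 out of n = 3f+1 validators) have been seen for the current round. The class name, parameters, and structure are illustrative assumptions, not the Mysticeti implementation.

```python
# Minimal sketch of a threshold clock: the local round advances only when a
# quorum of blocks for the current round has been observed.
from collections import defaultdict

class ThresholdClock:
    def __init__(self, num_validators: int, num_faulty: int):
        assert num_validators >= 3 * num_faulty + 1
        self.quorum = 2 * num_faulty + 1
        self.round = 0
        # round -> set of validator ids whose block for that round has been seen
        self.seen = defaultdict(set)

    def on_block(self, author: int, block_round: int) -> int:
        """Record a received block; advance the clock while the current round has a quorum."""
        self.seen[block_round].add(author)
        while len(self.seen[self.round]) >= self.quorum:
            self.round += 1
        return self.round

clock = ThresholdClock(num_validators=4, num_faulty=1)
for validator in (0, 1, 2):      # three round-0 blocks reach the quorum of 3
    current = clock.on_block(validator, 0)
print(current)                   # -> 1: the clock has advanced to round 1
```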


Applications of proofs to network security

Tuesday, 23 April, 2024 - 14:00 to 15:00

Blockchains have motivated a surge of research and development into succinct probabilistic proofs. As proof constructions have gotten dramatically more efficient, entirely new applications have become feasible in other areas as well. This talk will discuss two proposed applications in the area of network security. First...


Owl - an augmented password-authenticated key exchange protocol

Tuesday, 20 February, 2024 - 14:00 to 15:00

In this talk, I will first review three decades of research in the field of password-authenticated key exchange (PAKE). PAKE protocols can be categorized into two types: balanced and augmented schemes. I will share my experience of designing a balanced PAKE called J-PAKE in 2008 (joint work with Ryan). Today, J-PAKE has...
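To make the balanced/augmented distinction concrete, the sketch below contrasts what the server stores in each setting: password-equivalent material in the balanced case versus a one-way verifier in the augmented case. The group parameters and hashing are illustrative assumptions and not a secure instantiation of Owl, J-PAKE, or any real PAKE.

```python
# Toy contrast between balanced and augmented PAKE server records.
# Parameters are for illustration only and are NOT a secure construction.
import hashlib

P = 2**255 - 19   # a prime modulus, used here purely for illustration
G = 5             # illustrative generator choice

def hash_password(password: str) -> int:
    return int.from_bytes(hashlib.sha256(password.encode()).digest(), "big")

password = "correct horse battery staple"

# Balanced PAKE: both sides hold the password (or a hash of it), so the
# server record is password-equivalent material.
balanced_server_record = hash_password(password)

# Augmented PAKE: the server keeps only a one-way verifier g^H(pw) mod p,
# so a stolen record is not directly usable as the client's secret
# (though it still permits offline dictionary guessing).
augmented_server_record = pow(G, hash_password(password), P)
```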


Characterizing Machine Unlearning through Definitions and Implementations

Thursday, 29 February, 2024 - 14:00 to 15:00

The talk presents open problems in the study of machine unlearning. The need for machine unlearning, i.e., obtaining a model one would get without training on a subset of data, arises from privacy legislation and as a potential solution to data poisoning or copyright claims. The first part of the talk discusses approaches...
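The definition above has a natural, if expensive, gold standard: exact unlearning by retraining from scratch with the forget set removed. The sketch below illustrates that baseline; the model choice and data are illustrative assumptions.

```python
# Minimal sketch of exact unlearning: retrain without the forget set, yielding
# a model that matches one that never saw that subset. Approximate unlearning
# methods aim to reach (or approach) this result more cheaply.
import numpy as np
from sklearn.linear_model import LogisticRegression

def exact_unlearn(X: np.ndarray, y: np.ndarray, forget_idx: np.ndarray) -> LogisticRegression:
    """Retrain from scratch on the data with the forget set removed."""
    keep = np.setdiff1d(np.arange(len(X)), forget_idx)
    return LogisticRegression(max_iter=1000).fit(X[keep], y[keep])

# Example: forget the first five training points of a synthetic dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
unlearned_model = exact_unlearn(X, y, forget_idx=np.arange(5))
```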