Department of Computer Science and Technology

Date: 
Thursday, 20 February, 2020 - 15:00 to 16:00
Speaker: 
Carmela Troncoso (EPFL)
Venue: 
Lecture Theatre 2, Computer Laboratory
Abstract: 
In a machine-learning dominated world, users' digital interactions are monitored and scrutinized in order to enhance services. These enhancements, however, may not always have users' benefit and preferences as their primary goal. Machine learning, for instance, can be used to learn users' demographics and interests in order to fuel targeted advertisements, regardless of people's privacy rights; or to learn bank customers' behavioral patterns to optimize the monetary returns of loans, with disregard for discriminatory outcomes. In other words, machine learning models may be adversarial in their goals and operation. Therefore, adversarial machine learning techniques that are usually considered undesirable can be turned into robust protection mechanisms for users. In this talk we discuss two protective uses of adversarial machine learning, and the challenges for protection arising from the biases implicit in many machine learning models.

Bio:
Carmela Troncoso is an Assistant Professor at EPFL, where she leads the Security and Privacy Engineering (SPRING) Laboratory. Her research focuses on privacy protection, in particular on developing systematic means to build privacy-preserving systems and to evaluate these systems' information leakage.

Series: 
Centre for Mobile, Wearable Systems and Augmented Intelligence Seminar Series