
Department of Computer Science and Technology

Date: Friday, 24 January 2020, 12:00–13:00
Speaker: Eric Nalisnick (University of Cambridge)
Venue: FW26, Computer Laboratory
Abstract: 

Generative models are widely believed to be more robust to out-of-training-distribution inputs than conditional (i.e. predictive) models. In this talk, I challenge this assumption. We find that the densities learned by flow-based models, VAEs, and PixelCNNs cannot distinguish images of common objects such as dogs, trucks, and horses from those of house numbers, assigning a higher likelihood to the latter even when the model is trained on the former. We posit that this phenomenon is caused by a mismatch between the model’s typical set and its areas of high probability density: in-distribution inputs should reside in the former but not necessarily in the latter. To determine whether or not inputs reside in the typical set, we propose a computationally efficient hypothesis test based on the empirical distribution of model likelihoods. Experiments show that this test succeeds in detecting out-of-distribution inputs in many cases where previously proposed threshold-based techniques fail.
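
The abstract describes the test only at a high level. As a rough illustration of the idea, below is a minimal NumPy sketch of one way a typicality test in this spirit could be set up: estimate the model's entropy from in-distribution negative log-likelihoods, calibrate by bootstrap how far a batch's average negative log-likelihood may deviate from that estimate, and flag batches that fall outside the calibrated band. All names (fit_typicality_test, is_out_of_distribution, alpha, n_bootstrap) and the bootstrap calibration are illustrative assumptions, not the speaker's actual implementation.

```python
# Hypothetical sketch of a typicality-based OOD test; not the talk's code.
import numpy as np

def fit_typicality_test(train_nll, batch_size, n_bootstrap=10_000, alpha=0.99):
    """Calibrate the test from per-example negative log-likelihoods
    (-log p_theta(x)) computed on held-out in-distribution data,
    e.g. from a trained flow, VAE, or PixelCNN."""
    rng = np.random.default_rng(0)
    # Monte Carlo estimate of the model's entropy H[p_theta].
    entropy_hat = train_nll.mean()
    # Bootstrap in-distribution batches to learn how much the average
    # batch NLL typically deviates from the entropy estimate.
    deviations = np.empty(n_bootstrap)
    for i in range(n_bootstrap):
        batch = rng.choice(train_nll, size=batch_size, replace=True)
        deviations[i] = abs(batch.mean() - entropy_hat)
    # Reject at the (1 - alpha) level: deviations beyond this quantile
    # are larger than almost all in-distribution deviations.
    threshold = np.quantile(deviations, alpha)
    return entropy_hat, threshold

def is_out_of_distribution(batch_nll, entropy_hat, threshold):
    """Flag a batch whose average NLL lies outside the typical set,
    i.e. too far from the model's entropy in either direction."""
    return abs(batch_nll.mean() - entropy_hat) > threshold
```

Note that the test is two-sided: it rejects batches whose likelihood is suspiciously *high* as well as suspiciously low, which is what lets it catch cases (such as house numbers under a model of natural images) where a simple likelihood threshold fails.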

Series: NLIP Seminar Series
