Presented by Matthias Hein – University of Tübingen, Germany – at 3 p.m.

ABSTRACT: Current deep neural networks for image recognition do not know when they don’t know. Images unrelated to the task are assigned to one of the classes with high confidence. The machine learning system therefore cannot trigger human intervention or transition to a safe state when it is applied outside its specification. I will discuss our recent work to tackle this problem via (certifiably) adversarially robust out-of-distribution detection and show how (robust) out-of-distribution detection can be used in open-world semi-supervised learning.
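To make the problem concrete: a common baseline for out-of-distribution detection is to score an input by its maximum softmax probability and flag low-confidence inputs as out-of-distribution. The sketch below is only an illustration of that baseline (not the speaker's method); the logit values are invented for the example.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    # Maximum softmax probability: higher means the network is
    # more confident that the input belongs to a known class.
    return softmax(logits).max(axis=-1)

# Hypothetical logits: one clearly recognized in-distribution input,
# and one input the network has no strong opinion about.
in_dist_logits = np.array([8.0, 0.5, -1.0])
ood_logits = np.array([1.1, 0.9, 1.0])

print(msp_score(in_dist_logits))  # close to 1
print(msp_score(ood_logits))      # close to 1/3
```

The abstract's point is that this score is unreliable in practice: real networks often produce near-maximal confidence on unrelated images, and adversarial perturbations can push out-of-distribution inputs to high confidence, which is what motivates (certifiably) robust detectors.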

BIO: Matthias Hein is the Bosch Endowed Professor of Machine Learning and the coordinator of the international master's program in machine learning at the University of Tübingen. His main research interests are making machine learning systems robust, safe, and explainable, and providing theoretical foundations for machine learning, in particular deep learning. He serves regularly as an area chair for ICML, NeurIPS, and AISTATS, and was an action editor for the Journal of Machine Learning Research (JMLR) from 2013 to 2018. He has been awarded the German Pattern Recognition Award, an ERC Starting Grant, and several best paper awards (NeurIPS, COLT, ALT).

Categories: Events