ANITI's scientific seminars go on

Join us on 4 March, from 3 pm to 4 pm, with Stephan Wäldchen for a talk on Explaining Neural Network Classifiers: Hurdles and Progress.

The seminar will take place online via Zoom.

About Stephan Wäldchen

2012: Bachelor thesis in Particle Physics at CERN
2015: Master thesis in Quantum Information Theory in the group of Jens Eisert, Berlin
2015-2016: First PhD started with Klaus-Robert Müller at TU Berlin
2017: Second PhD started with Gitta Kutyniok at TU Berlin in Deep Learning Theory
Since May 2021: Working at the Zuse Institut Berlin with Sebastian Pokutta on interpretable AI

Explaining Neural Network Classifiers: Hurdles and Progress
Neural networks have become standard tools for high-dimensional decision making, e.g. in medical imaging, autonomous driving, and playing complex games. Even in high-stakes areas, they generally operate as black-box algorithms without a legible decision process. This has given rise to the field of explainable artificial intelligence (XAI). The first step for XAI methods is to discern the relevant input components for a decision from the irrelevant ones.
In this talk, we formalise this idea by extending the concept of prime implicants from abductive reasoning to a probabilistic setting. This setting captures what many XAI practitioners intuitively aim for. We show that finding such small implicants, even approximately, is a computationally hard problem. Furthermore, good solutions depend strongly on the underlying probability distribution. We present strategies to overcome both problems and discuss what challenges still remain.
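To make the idea concrete, here is a minimal sketch of a probabilistic relaxation of prime implicants: a feature subset S is considered relevant if fixing the input on S preserves the classifier's output with probability at least 1 − δ when the remaining features are resampled. The uniform resampling distribution, the brute-force search, and the toy classifier below are illustrative assumptions for a tiny binary example, not the speaker's actual method.

```python
import itertools
import random

def is_delta_relevant(f, x, subset, delta, n_samples=2000, rng=None):
    """Check whether fixing x on `subset` preserves f's output with
    probability >= 1 - delta, when the remaining bits are resampled
    uniformly at random (a toy stand-in for the data distribution)."""
    rng = rng or random.Random(0)
    target = f(x)
    hits = 0
    for _ in range(n_samples):
        y = [x[i] if i in subset else rng.randint(0, 1) for i in range(len(x))]
        hits += (f(y) == target)
    return hits / n_samples >= 1 - delta

def smallest_implicant(f, x, delta):
    """Brute-force search for a smallest delta-relevant subset.
    Exponential in the input size -- this is exactly the hardness
    the talk addresses."""
    n = len(x)
    for k in range(n + 1):
        for subset in itertools.combinations(range(n), k):
            if is_delta_relevant(f, x, set(subset), delta):
                return set(subset)

# Toy classifier: the output depends only on the first two bits (AND).
f = lambda z: z[0] and z[1]
x = [1, 1, 0, 1]
print(smallest_implicant(f, x, delta=0.1))  # -> {0, 1}
```

The search correctly ignores the two bits the classifier never reads; for realistic networks, both the relevance check and the minimisation must be approximated, which is where the hardness results discussed in the talk come in.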
Rewatch the seminar

