Scientific seminar given by Martin Cooper (IRIT), Friday 5 February at 3 pm.

ABSTRACT

Explaining decisions is at the heart of explainable AI. We investigate the computational complexity of providing a formally correct and minimal explanation of a decision taken by a classifier. In the case of threshold (i.e., score-based) classifiers, we show that a complexity dichotomy follows from the complexity dichotomy for languages of cost functions. In particular, submodular classifiers allow tractable explanation of positive decisions, but not negative decisions (assuming P ≠ NP). This is an example of the possible asymmetry between the complexity of explaining positive and negative decisions of a particular classifier. Nevertheless, there are large families of classifiers for which explaining both positive and negative decisions is tractable, such as monotone or linear classifiers. We extend tractable cases to constrained classifiers (when there are constraints on the possible input vectors) and to the search for contrastive rather than abductive explanations. Indeed, we show that tractable classes coincide for abductive and contrastive explanations in both the constrained and unconstrained settings.
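To give a concrete feel for the tractable case, here is a minimal Python sketch (illustrative only, not taken from the talk) assuming Boolean features and a linear scoring rule: a subset of fixed features is a sufficient reason for a positive decision if the score stays above the threshold even when every free feature is set adversarially, and a greedy deletion pass then yields a subset-minimal abductive explanation in polynomial time. All weights, thresholds, and feature values below are made up for illustration.

def worst_case_score(weights, instance, fixed):
    # Lowest achievable score when only the features in `fixed` are pinned
    # to their values in `instance`; each free Boolean feature takes the
    # value that minimises the score (min(w*0, w*1) = min(0, w)).
    return sum(w * instance[i] if i in fixed else min(0.0, w)
               for i, w in enumerate(weights))

def minimal_abductive_explanation(weights, threshold, instance):
    # Greedy deletion: start from all features and drop any feature whose
    # removal still guarantees a positive decision in the worst case.
    # Fixing a feature at its instance value can only raise the worst-case
    # score, so sufficiency is monotone and the result is subset-minimal.
    assert sum(w * x for w, x in zip(weights, instance)) >= threshold
    fixed = set(range(len(weights)))
    for i in range(len(weights)):
        if worst_case_score(weights, instance, fixed - {i}) >= threshold:
            fixed.discard(i)
    return sorted(fixed)

# Illustrative run: x = (1, 1, 0, 1) scores 6 >= 4, so it is positive;
# the explanation found is [0, 2, 3], since fixing x0=1, x2=0, x3=1
# forces a score of at least 5 whatever value x1 takes.
weights, threshold = [3.0, 1.0, -2.0, 2.0], 4.0
print(minimal_abductive_explanation(weights, threshold, [1, 1, 0, 1]))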

Watch the video of the seminar
