On March 7th and 8th, ANITI is organizing a workshop on explainability. The workshop, which will be held at B612, aims to deepen our understanding of what makes an AI explainable.
As machine learning (ML) algorithms and architectures have become increasingly complex, their explainability, or interpretability, has become a major scientific challenge. Meeting this challenge is fundamental to the scientific method: if we cannot interpret or explain the behavior of an ML system, then we do not understand it. And if we cannot understand the behavior of the system, then we cannot offer guarantees about what it does; we cannot, for instance, certify that the system is safe for use in situations where livelihoods or even lives may be at stake. Interpretability is thus a requirement for ML systems that we want to deploy in application areas such as transport, medicine, or economic and environmental forecasting.
Joao Marques-Silva - Logic-based explanations
David Vigouroux - Formal interpretability
Leila Amgoud - Axiomatic Foundations of Explainability
Serena Villata - Towards Natural Language Explanatory Argument Generation: Achieved Results and Open Challenges
Lucas de Lara - From structural to optimal transport counterfactuals
Karima Maklouf, Sami Zhioua - Causality for Fairness and Explainability in ML models
Thibault Laugel - Understanding prediction discrepancies in ML classifiers
Philippe Muller - Explaining semantic representations in natural language processing
End of the workshop
Following the workshop, ANITI is organizing an afterwork session on "Explainability" on Tuesday, March 8th, from 5 to 6:30 pm at B612.