On March 7th and 8th, ANITI is organizing a workshop on explainability. The workshop, which will be held in B612, will offer an opportunity to deepen our understanding of what explainable AI is.
As machine learning (ML) algorithms and architectures have grown more and more complex, their explainability or interpretability has become a major scientific challenge. On the one hand, meeting this challenge is fundamental to the scientific method: if we can't interpret or explain the behavior of an ML system, then we don't understand it. And if we can't understand the system's behavior, then we can't offer guarantees about what it does; we can't, for instance, certify that the system is safe for use in situations where livelihoods or even lives may be at stake. Interpretability thus becomes a requirement for ML systems that we want to use in application areas like transport, medicine, or economic and environmental forecasting.