Chair objectives

The DeepLEVER Chair aims to advance the state of the art in applying automated reasoning concepts, techniques, and tools to machine learning (ML) models.

Concretely, we seek rigorous explanations for the predictions made by ML models.

We also aim to certify the operational robustness of ML models.

This project envisions two main lines of research, namely the explanation and the verification of deep ML models.

It will build on the remarkable progress made by automated reasoners based on SAT, SMT, CP, and ILP solvers (among others) to further explainable and robust data-driven AI (hybrid AI for proving the robustness of neural networks).

The reasons for these successes include improved solver technology, more sophisticated encodings, and the exploitation of key concepts such as abstraction refinement and symmetry identification and breaking, among others.
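To illustrate the kind of reasoning involved, the sketch below computes a subset-minimal abductive explanation (a minimal set of feature values that entails the prediction) for a toy classifier. The model, features, and brute-force entailment check are all hypothetical simplifications: a real deployment would encode the model and delegate the entailment queries to a SAT or SMT solver rather than enumerate assignments.

```python
from itertools import product

N_FEATURES = 3  # toy setting: three binary features

def model(x):
    """Toy classifier (hypothetical): predicts 1 iff at least two features are set."""
    return int(sum(x) >= 2)

def entails(fixed, prediction):
    """Check whether fixing the features in `fixed` (index -> value) forces the
    model's prediction for every completion of the remaining free features.
    A solver-based implementation would answer this with one (un)satisfiability query."""
    free = [i for i in range(N_FEATURES) if i not in fixed]
    for bits in product([0, 1], repeat=len(free)):
        x = [0] * N_FEATURES
        for i, v in fixed.items():
            x[i] = v
        for i, v in zip(free, bits):
            x[i] = v
        if model(x) != prediction:
            return False
    return True

def abductive_explanation(instance):
    """Greedily shrink the full assignment to a subset-minimal set of feature
    literals that still entails the model's prediction on `instance`."""
    pred = model(instance)
    expl = {i: v for i, v in enumerate(instance)}
    for i in list(expl):
        trial = {j: v for j, v in expl.items() if j != i}
        if entails(trial, pred):
            expl = trial  # feature i is not needed to justify the prediction
    return expl

# For instance [1, 1, 0], features 0 and 1 alone already entail prediction 1.
print(abductive_explanation([1, 1, 0]))  # → {0: 1, 1: 1}
```

The greedy deletion loop is the standard way to extract one subset-minimal explanation; each iteration asks a single entailment question, which is exactly where SAT/SMT reasoners replace the enumeration above at scale.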

Program: Acceptable, certifiable & collaborative AI

Themes: Safe design and embeddability, Fair Learning, Explainability, Automated reasoning and decision making

Chair holder:
Joao Marques-Silva, external, University of Lisbon, PT


  • Martin Cooper (UT3, IRIT)
  • Emmanuel Hebrard (CNRS, LAAS)


Know more

AI logically

Portuguese researcher Joao Marques-Silva settled in Toulouse two years ago, where he has continued his research on explaining the decisions made by algorithms. Using logical reasoning, he seeks to verify machine learning models. Portrait of an artificial intelligence (AI) globetrotter.

Read the article in French
