#safe development and embeddability #fair learning #explainability #automated reasoning and decision

This project envisions two main lines of research: explanation and verification of deep ML models. It will build on the remarkable progress made by automated reasoners based on SAT, SMT, CP, and ILP solvers (among others) to further explainable and robust data-driven AI (hybrid AI for proving the robustness of neural networks). The reasons for these successes include improved solver technology, more sophisticated encodings, and the exploitation of key concepts such as abstraction refinement and symmetry identification and breaking, among others.
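As a minimal illustration of the verification line (not the project's actual tooling), the sketch below encodes local robustness of a toy ReLU network as an SMT query using the z3 solver; the network weights, reference input, and epsilon are hypothetical.

```python
# Minimal sketch: local robustness of a tiny ReLU network via SMT (z3).
# All weights, the reference input, and eps below are hypothetical values.
from z3 import Real, Solver, If, sat

# Toy network: 2 inputs -> 1 ReLU hidden unit -> 1 output
x1, x2 = Real("x1"), Real("x2")
pre = 0.6 * x1 - 0.4 * x2 + 0.1
h = If(pre > 0, pre, 0)          # ReLU activation
y = 1.2 * h - 0.3                # network output

# Reference input, assumed classified positive (y > 0), and perturbation radius
x1_0, x2_0, eps = 0.5, 0.2, 0.05

s = Solver()
# Perturbed input stays inside an L_inf ball of radius eps around the reference
s.add(x1 >= x1_0 - eps, x1 <= x1_0 + eps)
s.add(x2 >= x2_0 - eps, x2 <= x2_0 + eps)
# Ask the solver for a counterexample: a perturbation that flips the decision
s.add(y <= 0)

if s.check() == sat:
    print("Not robust; counterexample:", s.model())
else:
    print("Robust: no adversarial perturbation exists in the eps-ball")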

Acceptable, certifiable & collaborative AI

Lead:
João Marques-Silva, external, University of Lisbon, PT

Contact
jpms@ciencias.ulisboa.pt

Site
http://www.di.fc.ul.pt/~jpms/