Since its creation, ANITI has organized technical focus sessions for its industrial partners. Below is the list of libraries produced by ANITI and available as open source.
Explainability and Model Robustness
- XPLIQUE is a Neural Networks Explainability toolbox whose modules implement methods such as attribution methods, feature visualization, and concept-based methods (replay – slides – github)
- GEMS.AI is a Python package for AI fairness and interpretability. The approach is to build counterfactual distributions that permit answering "what if?" scenarios. The key principle is to stress one or more variables of a test set and then observe how the trained machine learning model reacts to the stress (website – replay – slides – github)
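The counterfactual "stress then observe" principle described for GEMS.AI can be sketched in a few lines. This is a minimal illustration of the idea, not GEMS.AI's API: a linear model stands in for the trained model, and `stress` is a hypothetical helper that shifts one feature of a test set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: the target depends strongly on feature 0,
# weakly on feature 1 (assumed setup for illustration).
X = rng.normal(size=(200, 2))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

# Train a simple linear model by ordinary least squares.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def stress(X_test, feature, shift):
    """Return a counterfactual copy of X_test with one feature shifted."""
    X_cf = X_test.copy()
    X_cf[:, feature] += shift
    return X_cf

X_test = rng.normal(size=(50, 2))
baseline = X_test @ w

# "What if this feature were one unit larger for everyone?"
# The gap between stressed and baseline predictions measures the
# model's sensitivity to that variable.
delta_f0 = float(np.mean(stress(X_test, 0, 1.0) @ w - baseline))
delta_f1 = float(np.mean(stress(X_test, 1, 1.0) @ w - baseline))
```

Here the stress on feature 0 moves the predictions roughly six times more than the same stress on feature 1, exposing which variable the model actually relies on.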
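Attribution methods like those in XPLIQUE answer a related question for individual predictions: which input features drive the model's output? A simple black-box example is occlusion attribution; the code below is a generic sketch of that technique (the `model` is a hypothetical stand-in, and this is not XPLIQUE's API).

```python
import numpy as np

# Hypothetical stand-in for a trained classifier: a linear scorer that
# weights only features 0 and 4 (assumed for illustration).
true_weights = np.zeros(9)
true_weights[[0, 4]] = [2.0, 3.0]

def model(x):
    return float(x @ true_weights)

def occlusion_attribution(x, baseline=0.0):
    """Occlusion attribution: replace each feature with a baseline value
    in turn and record how much the model's score drops. A large drop
    means the feature mattered for this prediction."""
    score = model(x)
    attribution = np.empty_like(x)
    for i in range(x.size):
        x_occ = x.copy()
        x_occ[i] = baseline
        attribution[i] = score - model(x_occ)
    return attribution

x = np.ones(9)
attr = occlusion_attribution(x)
```

For this toy model, the attribution map recovers exactly the two features the scorer uses, with magnitudes matching their weights.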
Computer Vision performance
- Advanced tools for Computer Vision using bio-inspired feedback mechanisms (replay – slides – github)
- Improve your Computer Vision model's robustness with PREDIFY, a brain-inspired framework (replay – slides – github)
- SimOOD, an evolutionary testing simulation with Out Of Distribution images (replay – slides – github)
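The brain-inspired feedback idea behind frameworks like PREDIFY rests on predictive coding: higher layers send top-down predictions of lower-layer activity, and the representation is refined to reduce the prediction error. The sketch below implements a generic predictive-coding inference loop, not PREDIFY's actual dynamics; the weights and data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed generative (feedback) weights: 4 hidden causes -> 16 inputs.
Wb = rng.normal(size=(16, 4))
h_true = rng.normal(size=4)
x = Wb @ h_true + rng.normal(scale=0.05, size=16)

def infer(x, steps=200, lr=0.01):
    """Infer hidden causes by minimizing the feedback prediction error.

    Each step: predict the input from the current hidden state through
    the feedback weights, then nudge the hidden state along the gradient
    of the squared prediction error (a generic predictive-coding update).
    """
    h = np.zeros(4)
    errors = []
    for _ in range(steps):
        pred = Wb @ h              # top-down prediction of the input
        err = x - pred             # prediction error signal
        h += lr * (Wb.T @ err)     # error-driven update of the hidden state
        errors.append(float(err @ err))
    return h, errors

h_est, errors = infer(x)
```

Over the iterations the prediction error shrinks and the inferred hidden state converges to the causes that generated the (noisy) input, which is the mechanism such feedback loops exploit to stabilize representations.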
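Evolutionary testing with out-of-distribution images, as in SimOOD, can be illustrated with a toy search loop. Everything below is an assumption for the sketch: `model_accuracy` is a hypothetical stand-in whose accuracy degrades with corruption severity, and the loop is a simple (1+λ)-style search, not SimOOD's actual algorithm.

```python
import random

random.seed(0)

# Hypothetical stand-in for a vision pipeline: accuracy degrades as the
# corruption severity applied to the test images grows (assumed behaviour).
def model_accuracy(severity):
    return max(0.0, 1.0 - 0.18 * severity)

def evolve_min_failing_severity(generations=40, pop_size=10, target_acc=0.5):
    """Search for the mildest corruption that still breaks the model.

    Each generation mutates the current best severity and keeps the
    smallest candidate whose accuracy falls below the target, homing in
    on the model's failure boundary.
    """
    best = 5.0  # start from the harshest corruption level, which fails
    for _ in range(generations):
        mutants = [min(5.0, max(0.0, best + random.gauss(0, 0.3)))
                   for _ in range(pop_size)]
        failing = [c for c in [best] + mutants
                   if model_accuracy(c) < target_acc]
        best = min(failing)  # best itself always fails, so this is safe
    return best

critical = evolve_min_failing_severity()
```

The returned severity approximates the boundary where the model's accuracy first drops below the target, which is exactly the kind of minimal failure-inducing test case an evolutionary OOD simulation tries to surface.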