This project will analyse fair learning and bias using tools from statistics and optimal transport theory.
It will provide means for removing biases from both data and machine learning algorithms, for instance by adding fairness constraints to the learning process.
By isolating the effects of a known bias and observing how a program's behavior changes once that bias has been removed, this work will also contribute to explaining ML program behavior, and may further apply to anomaly detection and to assessing whether an ML method is robust.
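As a minimal sketch of the constrained-learning idea above, the example below trains a logistic regression with an added demographic-parity penalty that discourages the model's average score from differing across a binary sensitive attribute. The data, penalty weight, and all function names are illustrative assumptions, not part of the project description; a realistic implementation would use the project's optimal-transport machinery rather than this simple quadratic penalty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (hypothetical): feature x is correlated with a binary
# sensitive attribute s, so a plain model inherits the bias.
n = 2000
s = rng.integers(0, 2, n)                                # sensitive attribute
x = rng.normal(loc=s.astype(float), scale=1.0, size=n)   # biased feature
y = (x + rng.normal(0.0, 0.5, n) > 0.5).astype(float)    # label
X = np.column_stack([x, np.ones(n)])                     # add intercept

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, s, lam, lr=0.1, steps=2000):
    """Gradient descent on log-loss plus a demographic-parity penalty:
    lam * (mean score when s=1 - mean score when s=0)^2."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y) / len(y)            # log-loss gradient
        gap = p[s == 1].mean() - p[s == 0].mean()
        dp = p * (1.0 - p)                        # derivative of sigmoid
        g1 = (X[s == 1] * dp[s == 1, None]).mean(axis=0)
        g0 = (X[s == 0] * dp[s == 0, None]).mean(axis=0)
        grad += 2.0 * lam * gap * (g1 - g0)       # penalty gradient
        w -= lr * grad
    return w

def dp_gap(w):
    """Absolute difference in mean predicted score across the two groups."""
    p = sigmoid(X @ w)
    return abs(p[s == 1].mean() - p[s == 0].mean())

w_plain = train(X, y, s, lam=0.0)    # unconstrained baseline
w_fair = train(X, y, s, lam=10.0)    # with the fairness constraint
print(dp_gap(w_plain), dp_gap(w_fair))
```

Comparing the two gaps shows the effect the paragraph describes: with the penalty active, the disparity in scores between the two groups shrinks relative to the unconstrained model, at some cost in raw accuracy.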
Programme: Acceptable, certifiable and collaborative AI
Themes: Fair learning, explainability
Jean-Michel Loubès, PR UT3, IMT