The jury of independent international experts convened by ANITI last July selected the Chairs of Excellence projects that will be included in the proposal that the institute will submit to the ANR at the end of September as part of the "IA Cluster" call ("pôles de recherche et de formation de rang mondial en intelligence artificielle", i.e. world-class AI research and training clusters). See the list below.
Uncertainty Quantification for Physical and Artificial Intelligence systems
Principal Investigator: Bachoc, François – UT3 / IMT
Co-Chair(s): Bartoli, Nathalie – ONERA / DTIS, Novello, Paul – IRT Saint-Exupéry
In the last decades, uncertainty quantification has proved highly effective for analyzing physical systems, in fields such as aeronautics, spacecraft, and energy. More recently, it has also been demonstrated that uncertainty quantification methods can be beneficially transferred from physical applications to Artificial Intelligence (AI) systems, in particular for explainability and fairness analysis.
The UQPhysAI chair will improve the fundamental understanding of, and develop robust and broadly applicable algorithms for, two cornerstones of uncertainty quantification: sensitivity analysis and active learning with Bayesian models. For sensitivity analysis, the chair will focus on causal analysis, high dimensions, complex inputs/outputs, and robustness. For Bayesian models and active learning, it will focus on nested and coupled settings, and on exploiting new and more diverse uncertainty quantification techniques.
The methodological developments will be fueled by, and applied to, realistic problems and data for both physical and AI systems, in collaboration with industrial partners: ONERA, Airbus, Liebherr and Vitesco.
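As a minimal illustration of the sensitivity-analysis side of the chair, the sketch below estimates first-order Sobol indices with the classical pick-freeze Monte Carlo scheme; the linear test model and the uniform inputs are illustrative choices, not project code.

```python
import numpy as np

def sobol_first_order(f, dim, n=100_000, seed=0):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices
    for a model f with independent Uniform(0, 1) inputs."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(n, dim))
    B = rng.uniform(size=(n, dim))
    fA = f(A)
    var = fA.var()
    indices = []
    for i in range(dim):
        ABi = B.copy()
        ABi[:, i] = A[:, i]      # "freeze" coordinate i, resample the rest
        fABi = f(ABi)
        indices.append(np.mean(fA * (fABi - fA.mean())) / var)
    return np.array(indices)

# Illustrative linear model: the variance splits as a_i^2 * Var(x_i).
model = lambda x: 4.0 * x[:, 0] + 1.0 * x[:, 1]
S = sobol_first_order(model, dim=2)
```

For this model the exact decomposition gives S1 = 16/17 and S2 = 1/17, which the estimator recovers up to Monte Carlo error.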
Facing Low Resource Natural Language Processing
Principal Investigator: Braud, Chloé – CNRS / IRIT
Co-Chair: Benamara, Farah – UT3 / IRIT
Natural Language Processing is a subfield of Artificial Intelligence that aims at building computational models of human language. Current approaches rely on machine learning algorithms that need data for training and evaluation. However, annotating data is expensive and time-consuming: as a consequence, annotations are scarce for most languages, specialized domains, and specific high-level tasks. This leads to issues with robustness (when systems are unable to generalize to new situations) and fairness, since NLP applications only work for a limited range of human language productions. This ANITI 2.0 Starting Chair proposal intends to tackle these issues by leveraging Weak Supervision, a learning paradigm that allows the development of hybrid systems and has shown promising results on several NLP tasks.
The general idea is to use labeling functions based either on rules derived from expert knowledge or on the predictions of statistical systems, thus making it possible to take advantage of all the available sources of information, even noisy ones. Within our Chair FLowReN (Facing Low-Resource NLP), we will extend this paradigm to the high-level, semantic-pragmatic tasks that suffer most from data scarcity. We will also investigate multilingual, multimodal and multitask approaches, to both enlarge the sources of weak supervision and enhance performance for low-resource languages and domains. The theoretical results will serve in real-world use cases provided by the two industrial partners that will support this Chair: Airbus and Liebherr. In this context, the ability to adapt to few annotated data within specialized domains is crucial. Finally, we aim to explore the explainability of the end systems, through control over the entire learning process. This project based on hybrid learning aims to set a new state of the art on major NLP tasks while going a step further toward robust and fair AI.
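The labeling-function mechanism can be sketched in a few lines. The sentiment task and heuristics below are toy assumptions chosen for illustration, and real weak-supervision systems aggregate votes with generative label models rather than a plain majority vote.

```python
from collections import Counter

ABSTAIN = None

# Hypothetical labeling functions for a toy sentiment task:
# each returns "pos", "neg", or ABSTAIN when it has no opinion.
def lf_keyword_pos(text):
    return "pos" if any(w in text.lower() for w in ("great", "excellent")) else ABSTAIN

def lf_keyword_neg(text):
    return "neg" if any(w in text.lower() for w in ("awful", "terrible")) else ABSTAIN

def lf_exclamation(text):
    # Weak heuristic: enthusiastic punctuation hints at positive sentiment.
    return "pos" if text.count("!") >= 2 else ABSTAIN

LFS = [lf_keyword_pos, lf_keyword_neg, lf_exclamation]

def weak_label(text):
    """Aggregate the noisy labeling-function votes by majority; ABSTAIN if no vote."""
    votes = [v for v in (lf(text) for lf in LFS) if v is not ABSTAIN]
    return Counter(votes).most_common(1)[0][0] if votes else ABSTAIN

labels = [weak_label(t) for t in
          ["An excellent read!!", "Terrible pacing.", "It exists."]]
```

The resulting weak labels can then train an ordinary supervised model, which is where the hybrid system comes in.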
Mathematical Approaches for Deep Learning, representation Learning, And high-Dimensional Statistics
Principal Investigator: Chhaibi, Reda – UT3 / IMT
Co-Chair(s): Gamboa, Fabrice – UT3 / IMT, Lagnoux, Agnès – UT2J / IMT, Pellegrini, Clément – UT3 / IMT
Representation learning is a fundamental aspect of artificial intelligence that involves automatically extracting a compact and useful representation of input data for classification, clustering, and prediction tasks. However, this can be a challenging task in deep learning, where overparameterization is a concern, and in high-dimensional statistics, where sparse structures may be hidden. In many industrial settings, computer simulations generate complex data that require feature selection for effective analysis. To overcome this challenge, we propose leveraging powerful mathematical tools for efficient representation selection, including Sensitivity Analysis (SA), signatures from Rough Path Theory (RPT), and Random Matrix Theory (RMT). Sensitivity analysis enables the identification of the most informative features by selecting the variables that contribute the most to the variability in the data. Signatures from Rough Path Theory provide a universal and expressive representation for time series data that captures complex temporal dependencies and enables effective analysis. Finally, RMT can estimate the number of significant principal components in high-dimensional datasets and provide insight into the stability of neural networks. By utilizing these mathematical tools for representation learning, our goal is to facilitate efficient and effective analysis of complex industrial data, ultimately supporting data-driven decision making.
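As a small illustration of the RMT ingredient, the sketch below counts significant principal components by comparing sample-covariance eigenvalues to the Marchenko-Pastur bulk edge; the spiked synthetic data and the known noise variance are illustrative assumptions.

```python
import numpy as np

def n_significant_components(X, sigma2=1.0):
    """Count sample-covariance eigenvalues above the Marchenko-Pastur
    bulk edge sigma2 * (1 + sqrt(p/n))^2, a standard RMT rule of thumb
    for how many principal components carry signal."""
    n, p = X.shape
    eigvals = np.linalg.eigvalsh(X.T @ X / n)
    edge = sigma2 * (1.0 + np.sqrt(p / n)) ** 2
    return int(np.sum(eigvals > edge))

rng = np.random.default_rng(1)
n, p = 2000, 200
noise = rng.standard_normal((n, p))
# Plant two strong rank-one signals ("spikes") on top of white noise.
u1 = rng.standard_normal(p); u1 /= np.linalg.norm(u1)
u2 = rng.standard_normal(p); u2 /= np.linalg.norm(u2)
scores = rng.standard_normal((n, 2))
X = noise + 6.0 * np.outer(scores[:, 0], u1) + 5.0 * np.outer(scores[:, 1], u2)

k = n_significant_components(X)
```

With two planted spikes well above the bulk edge, the count isolates the signal dimensions without any supervised information.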
LArge Tensors for daTa analysIs and maChine lEarning
Principal Investigator: Goulart, Henrique – INP / IRIT
Co-Chair: Mai, Xiaoyi – UT2J / IMT
By exploiting the multilinear and low-dimensional structure of data found in several applications, or of their high-order statistics, tensor-based methods have established themselves as a powerful and effective toolkit in machine learning and data science. This is particularly true for unsupervised learning tasks, thanks to the strong uniqueness properties exhibited by standard tensor models such as the canonical polyadic decomposition. However, this approach relies on approximate decomposition algorithms whose actual performance is, to a large extent, not well understood. Moreover, existing performance guarantees often rely on conditions which are too stringent or whose validity in practice is hard to assess, especially in the common modern regime where the number of observations and their dimension are of comparable size. This chair aims at bridging these gaps by carrying out a precise quantification of the performance of tensor-based machine learning methods, and more generally of tensor decomposition algorithms, in the large-dimensional regime. In particular, it will build upon and extend an approach recently introduced by the PI for the study of large random tensor models via random matrix theory. As a result, new results on fundamental questions related to the properties and estimation of random tensor models will be derived. We will then capitalize on the obtained results in order to develop improved and provably reliable tensor-based machine learning methods and tensor decomposition algorithms for large-dimensional data.
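The workhorse whose large-dimensional behaviour the chair studies can be sketched as plain alternating least squares for the canonical polyadic decomposition. The implementation below is a textbook sketch run on a small synthetic tensor, not an optimized or provably reliable variant.

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product of (I, R) and (J, R) -> (I*J, R)."""
    R = A.shape[1]
    return np.einsum("ir,jr->ijr", A, B).reshape(-1, R)

def cp_als(T, rank, n_iter=500, seed=0):
    """Alternating least squares for the canonical polyadic decomposition
    of a 3-way tensor T ~ sum_r a_r (x) b_r (x) c_r."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    T1 = T.reshape(I, J * K)                     # mode-1 unfolding
    T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)  # mode-2 unfolding
    T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)  # mode-3 unfolding
    for _ in range(n_iter):
        # Each step is a linear least-squares problem in one factor.
        A = T1 @ np.linalg.pinv(khatri_rao(B, C).T)
        B = T2 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = T3 @ np.linalg.pinv(khatri_rao(A, B).T)
    recon = np.einsum("ir,jr,kr->ijk", A, B, C)
    return A, B, C, recon

# Recover an exact rank-2 tensor (illustrative sanity check).
rng = np.random.default_rng(3)
A0, B0, C0 = (rng.standard_normal((d, 2)) for d in (4, 5, 6))
T = np.einsum("ir,jr,kr->ijk", A0, B0, C0)
*_, recon = cp_als(T, rank=2)
err = np.linalg.norm(recon - T) / np.linalg.norm(T)
```

On a noiseless low-rank tensor the fit is essentially exact; it is precisely the behaviour of such iterations on large noisy tensors that the chair seeks to quantify.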
Hybrid, Interpretable Machine Learning
Principal Investigator: Wilson, Dennis – ISAE / ISAE
Co-Chair: Almar, Rafaël – IRD / LEGOS
The HIML project is an ambitious initiative aimed at combining the strengths of symbolic regression and deep learning to create Hybrid Symbolic Regression (HSR) for interpretable machine learning models. With a focus on high-stakes domains, such as environmental modeling, the project seeks to develop models that are both accurate and easily understood by domain experts. The research is divided into two main directions.
The first research direction is centered on theoretical advancements in HSR, utilizing Large Language Models (LLMs) to evolve code and deep learning as a surrogate fitness to guide the evolutionary process.
The goal is to improve the accuracy and interpretability of models by working directly with code and exploring parts of the data not yet covered by the current population of functions.
The second research direction emphasizes the practical application of HSR to enhance environmental models in two crucial areas: coastal risk assessment and ENSO (El Niño-Southern Oscillation) modeling. By prioritizing interpretability and working closely with domain experts, the HIML project aims to develop more accurate, actionable models that can inform decision-making in the face of climate change and other environmental challenges.
In summary, the HIML project sets out to make significant contributions to both theoretical and applied aspects of machine learning, particularly in the context of environmental modeling. By leveraging the power of Hybrid Symbolic Regression, the project aspires to create more accurate and interpretable models that can effectively inform high-stakes decisions and foster better communication among scientists, policymakers, and other stakeholders. The success of this project holds the potential to transform the way machine learning is employed in critical domains, enabling more responsible and effective decision-making in the face of global challenges.
Human-Centered AI for Argument-based Deliberation
Principal Investigator(s): Amgoud, Leila – CNRS / IRIT
Co-Chair(s): Lagasquie-Schiex, Marie-Christine – UT3 / IRIT, Tamine, Lynda – UT3 / IRIT, Zarate, Pascale – UT Capitole / IRIT, Ben Kraiem, Ines – Sogeti/Capgemini / SogetiLabs
Recognized as vital in a group decision-making process, deliberation allows stakeholders to discuss and reach agreements on controversial issues before ultimate decisions are made. It brings several benefits, one of which is ensuring well-informed and well-accepted decisions. Its backbone is argumentation, which consists in justifying claims by arguments, i.e., reasons behind claims. The greatest challenges facing deliberation systems are identifying, analysing, evaluating, and aggregating large sets of interacting arguments, generally of disparate types, and resolving potential disagreements between stakeholders.
The HuCAD project promotes a novel holistic paradigm fostering human-machine collaboration for effective deliberations. It aims to develop AI systems that not only address all of the above challenges, but also enhance the argumentation capabilities of stakeholders by suggesting relevant arguments retrieved from the web, and by facilitating a good grasp of a debate thanks to an automatically generated, structured synthesis of the most salient arguments. The end goal of HuCAD is a suite of theoretical developments, namely (i) a sufficiently general formal theory of argumentation which supports real-world arguments, (ii) language models grounded in computational argumentation and tailored for argument retrieval and argumentation synthesis, and (iii) an advanced theory of multiple-criteria decision making founded on the two previous developments; together with an application in real-life scenarios through a collaboration with the industrial partner CAPGEMINI (ex-SOGETI), experienced in collaborative decision making. The consortium includes world-renowned academic experts in computational argumentation, decision theory and information retrieval.
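The kind of formal machinery involved can be illustrated with Dung-style abstract argumentation, where acceptability is computed from the attack relation alone; the three-argument framework below is, of course, a toy.

```python
def grounded_extension(args, attacks):
    """Grounded (most skeptical) extension of an abstract argumentation
    framework: iterate the characteristic function from the empty set
    until a fixed point is reached."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in args}
    ext = set()
    while True:
        # An argument is acceptable w.r.t. ext if every one of its
        # attackers is itself attacked by some member of ext.
        acceptable = {
            a for a in args
            if all(any((d, b) in attacks for d in ext) for b in attackers[a])
        }
        if acceptable == ext:
            return ext
        ext = acceptable

# a attacks b, b attacks c: a is unattacked, and a defends c against b.
E = grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")})
```

Here the grounded extension is {a, c}: a is unattacked, so it is in, and it defends c by attacking c's only attacker b.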
The project has multiple applications with strong social and economic impact, including, but not limited to: debate platforms, which abound on the web, for enhancing opinion formation; new-generation argumentative search engines able to foster debates by providing in-context pro and con arguments in either list or structured-synthesis form, beyond the classical ten blue links; and, obviously, collaborative decision-making platforms, which are increasingly used by companies and institutions.
Frugal Reinforcement Learning for Stochastic Networks
Principal Investigator(s): Ayesta, Urtzi – CNRS / IRIT
Co-Chair(s): Jonckheere, Matthieu – CNRS / LAAS
Markov decision processes (MDPs) and their equivalents in reinforcement learning have been highly effective in solving large-scale problems over the past two decades. However, their success often depends on exceptional computational resources, limiting their applicability in contexts with restricted data volume or computing power.
In contrast with this general trend, RL4SN aims to develop learning algorithms suitable for situations where data is scarce or computational power is limited, with a focus on stochastic networks. These problems are relevant from a practical perspective (for instance, data centers provide the main infrastructure supporting Internet applications and scientific computation), but the RL techniques developed so far are not directly applicable to them. Stochastic networks may have sparse and rarely non-zero rewards; not all policies are stable (in the sense of keeping the number of jobs bounded); optimal policies exhibit clear structures in disjoint regions; and so-called index policies are known to perform very well. RL4SN is motivated by the potential for significant improvement that these properties offer to learning algorithms.
The main objective of RL4SN is thus to leverage the specific structures of the underlying MDPs of stochastic network problems to develop tailored learning algorithms. To achieve this ambitious objective, RL4SN is organized in three main tasks, each addressing a separate objective: (i) improving the exploration of learning algorithms in stochastic networks, (ii) creating more efficient learning algorithms with lower data consumption requirements, and (iii) developing a set of algorithms to efficiently learn index policies in stochastic networks.
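The structural properties RL4SN exploits can be seen on a toy example: in the admission-control MDP below (an illustrative model, not one of the project's networks), value iteration recovers an optimal policy with the threshold structure typical of stochastic networks.

```python
import numpy as np

# Toy admission-control MDP: state = number of jobs in a FIFO queue of
# capacity N. Each slot an arrival occurs w.p. p (admit for revenue R,
# or reject), one service completes w.p. q, and each job in the system
# costs c per slot. All parameters are invented for illustration.
N, p, q, R, c, gamma = 10, 0.5, 0.6, 5.0, 1.0, 0.95

def q_value(V, s, admit):
    """Expected one-step-ahead value of deciding `admit` in state s."""
    if admit and s == N:           # queue full: the arrival is lost anyway
        admit = False
    s_after = s + 1 if admit else s
    reward = (R if admit else 0.0) - c * s
    ev = q * V[max(s_after - 1, 0)] + (1 - q) * V[s_after]
    return reward + gamma * ev

V = np.zeros(N + 1)
for _ in range(2000):  # value iteration to (near) convergence
    V = np.array([p * max(q_value(V, s, True), q_value(V, s, False))
                  + (1 - p) * q_value(V, s, False)
                  for s in range(N + 1)])

# Admit only when strictly beneficial: the result is a threshold policy.
policy = [q_value(V, s, True) > q_value(V, s, False) for s in range(N + 1)]
```

The computed policy admits while the queue is short and rejects beyond a threshold; a learning algorithm that searches only within threshold (or index) policies needs far less data than generic RL.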
Center for Moral Artificial Intelligence
Principal Investigator: Bonnefon, Jean-François – CNRS / TSE-R
Research on ethical artificial intelligence has primarily focused on value alignment, the challenge of ensuring intelligent machines operate in accordance with human moral norms and values. This approach assumes that moral norms are fixed, being either prescriptive, defined by experts, or descriptive, discovered through behavioral science. While this perspective has been valuable, it also limits the exploration of AI's potential to shape moral norms. New research is needed to consider the dynamic nature of moral norms and the role of intelligent machines in strengthening, challenging or transforming them. The Center for Moral Artificial Intelligence (MoAI) aims to investigate the impact of intelligent machines on human moral norms and values using a multidisciplinary approach that combines moral psychology, experimental economics, and computer science. MoAI projects begin by identifying which moral values are potentially affected by a technology and why, using insights from moral psychology; they then use experimental economics to design incentive-compatible protocols for measuring the technology's impact on these values. Finally, computer science is employed to develop a simplified or prototyped version of the technology for use in experiments, with the added challenge that this technology often does not exist yet.
Disaster risk prediction and multivariate anomaly detection
Principal Investigator(s): Daouia, Abdelaati – TSE GE / TSE-R
Co-Chair(s): Sabourin, Anne – Université Paris Cité, Stupfler, Gilles – Université d’Angers
Disaster or global risk assessment is concerned with the analysis of rare events that carry the potential of serious impacts on our health, the environment or the economy. This includes systemic risk mitigation, which is crucial in finance and insurance, especially with the advent of climate, epidemiological, and cybersecurity risks. Available methods typically break down in realistic settings, where the data can feature heavy tails with various forms of heterogeneity (different sample sizes, different marginal distributions including heteroskedasticity, etc.), dependence across time and/or space, non-stationarity in time due to economic crises and climate change, and intricate covariates representing microeconomic characteristics or describing, e.g., climate, biosphere and environmental states. Another major challenge arises when the number of response variables is large in extremal regression models, where computational constraints on the sample size and theoretical difficulties in handling high-dimensional information plague the accuracy of prediction and inference about tail risk. Regression quantiles, the usual metrics for quantifying such conditional risk, are themselves often criticized for their lack of alertness and reactivity to the severity of extreme (disastrous) observations. Our project attempts to solve these difficulties through the lens of extreme value theory combined with machine learning methods (random forests, gradient tree boosting and deep neural networks) or with dimension reduction techniques, so as to propose least asymmetrically weighted squares regression models that have the ability to extrapolate beyond the range of observed values and to model complex covariate dependencies in the predictors. Our applications include risk assessment of cyber insurance on data breaches, as well as risk prediction of complex environmental and climatic processes. They also concern anomaly detection in industrial problems.
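Asymmetrically weighted least squares can be illustrated on its simplest case, the sample expectile: unlike a quantile, it reacts to the magnitude of every tail observation. The fixed-point scheme and the Gaussian data below are a minimal sketch, not the project's estimators.

```python
import numpy as np

def expectile(y, tau, n_iter=100):
    """Sample tau-expectile: minimiser of the asymmetrically weighted
    squared loss sum_i |tau - 1{y_i <= m}| * (y_i - m)^2, computed by
    an iteratively reweighted mean (fixed-point) scheme."""
    m = y.mean()                        # tau = 0.5 gives back the mean
    for _ in range(n_iter):
        w = np.where(y > m, tau, 1.0 - tau)
        m = np.sum(w * y) / np.sum(w)   # first-order condition
    return m

rng = np.random.default_rng(0)
y = rng.standard_normal(50_000)
e50 = expectile(y, 0.5)    # coincides with the sample mean
e99 = expectile(y, 0.99)   # deep in the right tail, yet uses all the data
```

Because every observation enters the weighted average, a single enormous loss shifts a high expectile, whereas a quantile would barely move; this "alertness" is the motivation for expectile-based tail-risk measures.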
Innovations in the Wake of COVID-19
Principal Investigator(s): Chen, Daniel L. – CNRS / TSE-R
Slow justice delivery is associated with a poor business climate and can have serious economic and welfare consequences (World Bank, 2017). When institutional barriers limit access to judicial resolutions, victims of crimes are incapable of pursuing restitution, beneficiaries of government services cannot benefit from needed resources, and mundane administrative tasks become insurmountable. An estimated 1.5 billion people around the world are unable to obtain justice for administrative, criminal, or civil justice problems (World Justice Project, 2019). This global challenge of unmet justice has been severely exacerbated by the COVID-19 pandemic. This project creates knowledge and evidence on how to build more resilient justice systems in the wake of COVID-19 through tech-enabled innovations.
My team will develop original governance and institutions research with policy implications for COVID-19 recovery efforts of not only judiciaries in Kenya, Peru, Pakistan, Brazil, and India where the research will be conducted, but also other judiciaries around the world. In particular, the insights of our research on various e-justice innovations will be relevant to developing countries where legal capacity has been most affected by COVID-19. Open source tools will be released such that other countries can adapt and use them to support struggling judiciaries.
The questions are threefold: How can we improve productivity and effectiveness of courts dealing with backlogs of cases? How can we expand access to justice for citizens? Can data science and artificial intelligence reduce information frictions and unlock the positive effects of justice on economic development? We will evaluate different e-justice innovations with the goal of strengthening judiciaries to deal with the growing backlog of cases and low citizen access to courts. The results will help us to understand which innovations work to build resilience in justice systems around the world in the wake of COVID-19.
Principal Investigator(s): Dehais, Frédéric – ISAE / ISAE
Co-Chair(s): Grosse-Wentrup, Moritz – TU München
Brain-Computer Interface (BCI) technology has the potential to revolutionize human-machine interaction and improve the lives of people with disabilities. However, end-user related issues have hindered the widespread adoption of BCIs. State-of-the-art machine learning algorithms for BCIs require large training datasets, leading to long and tedious calibration times that must be repeated before each use. To address these limitations, Professors F. Dehais (PI, ISAE-SUPAERO) and M. Grosse-Wentrup (Co-Chair, Univ. Vienna) will combine their respective expertise in neuroscience and AI to design seamless and robust BCIs in real-world environments. The team will optimize the design of aperiodic visual stimuli (code-VEP) using deep learning techniques to probe user intentions for self-paced interactions, use unobtrusive sensors to maximize user comfort, and implement frugal machine learning algorithms that can be trained with less than one minute of data at low computational cost. We will collect basic neurophysiological data from over 100 participants in order to optimize the design of our machine learning algorithms. Our objective is to achieve a minimum accuracy of 95%. Experiments with our online BCI will then be run with stroke patients in Vienna and pilots in Toulouse. This project aims to improve the quality of life for individuals with disabilities and enhance safety and efficiency in aeronautical applications. Furthermore, the collected data, machine learning algorithms, and software development will be shared with the community, promoting open science.
AI for Smart and Sustainable Air Traffic Management and Air Mobility
Principal Investigator: Delahaye, Daniel – ENAC / ENAC LAB
The proposed chair targets two main pillars of future air mobility, which are strongly related.
AI for Sustainable Air Operations and Air Mobility
Air transportation is currently facing environmental challenges for which AI may bring solutions to reduce the overall impact of aviation. It is estimated that air operation optimization may reduce CO2 emissions by 10%. In addition, non-CO2 aviation impacts (contrails, which contribute to radiative forcing) may also be reduced by new large-scale optimization of aircraft trajectories. Noise abatement issues will also be considered in this research. We propose in this chair to develop new AI decision support tools for optimizing air operations (trajectory planning, etc.) in order to meet these new challenges. Such sustainable trajectories (continuous climb and descent, fuel-optimal trajectories in the presence of wind, etc.) do not stick to the airways network and will therefore be more difficult for air traffic controllers (ATCOs) to manage. We also propose to develop new AI decision support tools to help ATCOs manage such trajectories by increasing the level of automation of the ground segment. In addition, ML algorithms will be investigated to improve the prediction of air operations (trajectory prediction, etc.) as well as of persistent-contrail-favorable areas in the airspace.
AI for ATM Automation
If we compare the onboard side and the ground side, one can notice that automation has been much more developed in the cockpit than in control rooms. This is due to the difficulty of bringing automation into the controller's task, which consists of managing conflict detection and resolution between aircraft. Many efforts have been made in the past to develop decision support tools that help controllers manage the traffic and thereby enhance the capacity of the system. Unfortunately, little progress has been made in this direction, due to the lack of certification of such algorithms, and we propose to revisit this field in light of the new developments in artificial intelligence. The main objective of this research is to develop decision support tools that help controllers manage sustainable trajectories (continuous climb, etc.) and thereby improve the overall capacity and sustainability of the air transportation system. To reach this goal, we propose to develop trustable AI algorithms with a strong focus on explainability and robustness for such a critical application. In addition, the improvement of trajectory prediction algorithms for conflict detection will also be addressed in this research.
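As a toy illustration of trajectory optimization off the airways network, the sketch below runs a shortest-path search over a grid whose per-cell costs could encode headwind penalties or contrail-prone areas to avoid; the grid, the costs and the 4-connected moves are invented for the example.

```python
import heapq

def plan(cost, start, goal):
    """Dijkstra over a 2-D grid of per-cell traversal costs (4-connected)."""
    rows, cols = len(cost), len(cost[0])
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[(r, c)]:
            continue                      # stale queue entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]     # pay the cost of the entered cell
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

# 3x4 grid; the middle row is expensive (say, a contrail-favorable layer),
# except for one cheap crossing on the right.
grid = [[1, 1, 1, 1],
        [9, 9, 9, 1],
        [1, 1, 1, 1]]
path, fuel = plan(grid, (0, 0), (2, 3))
```

The planner routes along the cheap top row and crosses the expensive layer at its single low-cost cell, exactly the kind of wind- or contrail-aware deviation that such trajectories would take.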
Advances in majorization-minimization algorithms for optimization with non-quadratic loss functions
Principal Investigator(s): Fevotte, Cédric – CNRS / IRIT
Co-Chair(s): Cazelles, Elsa – CNRS / IRIT, Soubies, Emmanuel – CNRS / IRIT
Many problems in machine learning and signal processing involve the optimisation of a loss function with respect to a set of parameters of interest. A common choice is the quadratic loss because it enjoys convenient mathematical properties that make it amenable to optimisation. However, from a modelling point of view, the quadratic loss underlies a Gaussian model that does not always comply with the geometry of the data. This is the case when dealing with nonnegative, integer-valued or binary data for which non-quadratic losses are more suitable.
The aim of AMINA is to advance the theory and methodology of optimisation with non-quadratic loss functions using the framework of majorisation-minimisation (MM). MM consists in iteratively building and minimising a locally tight upper bound of the loss. In other words, it resorts to the iterative optimisation of a local approximation. This is an intuitive and yet powerful optimisation framework that does not require stringent assumptions. MM algorithms decrease the value of the loss at every iteration and do not require tuning parameters. Well-designed upper bounds can finely capture the local curvature of the loss, resulting in efficient updates. Though MM can be traced back to the 1970s, it has enjoyed a significant revival in the last ten years. I played a part in this revival with highly cited articles about MM for nonnegative matrix factorisation (NMF) with the beta-divergence, a wide class of loss functions of important practical value.
AMINA will tackle challenging problems related to the design and convergence of MM algorithms in four innovative machine learning and signal processing settings: 1) non-alternating updates for NMF, 2) phase retrieval with the beta-divergence, 3) unbalanced optimal transport for audio interpolation, 4) stochastic MM for deep learning. Designing efficient optimisation algorithms with convergence guarantees is a crucial step in building trustworthy AI systems.
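The flavour of MM updates can be seen in the classical multiplicative updates for NMF with the KL divergence (the beta = 1 member of the beta-divergence family): each update minimises a tight upper bound of the loss, so the loss never increases and no step size is needed. The sketch below is the textbook alternating scheme, i.e. precisely the baseline that objective 1) seeks to improve upon.

```python
import numpy as np

def nmf_kl(V, rank, n_iter=500, seed=0):
    """Majorisation-minimisation (multiplicative) updates for NMF with
    the KL divergence, after Lee & Seung: V ~ W @ H with W, H >= 0."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.uniform(0.5, 1.5, (m, rank))
    H = rng.uniform(0.5, 1.5, (rank, n))
    eps = 1e-12                       # guards against division by zero
    for _ in range(n_iter):
        WH = W @ H + eps
        W *= (V / WH) @ H.T / (H.sum(axis=1) + eps)
        WH = W @ H + eps
        H *= W.T @ (V / WH) / (W.sum(axis=0)[:, None] + eps)
    return W, H

# Illustrative check on an exactly nonnegative-rank-3 matrix.
rng = np.random.default_rng(1)
V = rng.uniform(size=(20, 3)) @ rng.uniform(size=(3, 30))
W, H = nmf_kl(V, rank=3)
rel_err = np.linalg.norm(W @ H - V) / np.linalg.norm(V)
```

The multiplicative form is exactly what the MM majoriser yields for this loss: nonnegativity is preserved automatically, and each sweep decreases the KL divergence.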
Designing Artificial Social Reasoners
Principal Investigator: Lorini, Emiliano – CNRS / IRIT
Co-Chair: Herzig, Andreas – CNRS / IRIT
The DeSiRe advanced chair (AC) aims to produce a number of languages, logic-based models and algorithmic solutions for endowing an artificial conversational agent with advanced social reasoning capabilities coupled with learning capabilities. The project is both theoretical and practical.
It ranges from conceptual and formal analyses, including the proof-theoretic and complexity aspects of the developed languages, to the implementation of the reasoning algorithms in the architecture of a conversational agent and, finally, to the validation of the models and algorithmic solutions with actual end-users. In addition to the PI, the chair will involve one associate researcher and one collaborator from IRIT, as well as three external collaborators from ENS Rennes and from the Law Department of the University of Bologna, Italy. It will also involve an industrial partner specialized in conversational and emotional AI, which will be in charge of transferring the models and algorithms developed in the context of the chair into its conversational agent.
Machine Learning for Sustainable International Development (ML4SID)
Principal Investigator: Hidalgo, Cesar – UT / IRIT
Co-Chair: Stojkoski, Viktor – University Ss. Cyril and Methodius in Skopje / Faculty of Economics
Our understanding of sustainable international development has benefited from recent advances in economic complexity, a field using machine learning to explain international and regional differences in economic growth, inequality, and emissions. Yet, despite important progress, these advances are constrained by data availability and by the limited adoption of machine learning methods among people working on regional and international development. The ML4SID project will leverage recent innovations in machine learning (ML) to advance data and methods for sustainable international development. We will use machine learning to create and/or augment datasets to explore key understudied activities, such as digital trade, and to map emissions along value chains. We will then use these datasets to deepen our understanding of questions such as the role played by digital technologies in the decoupling between economic growth and greenhouse gas emissions, and to explore new economic complexity methods leveraging modern neural network architectures. Finally, we will spell out the implications of these findings in efforts to enhance economic data observatories (such as oec.world and datamexico.org, which together attract ~1M monthly users) and by providing policy advice aligned with the U.N.'s SDGs and the European agenda on the twin transition (digital + environmental). By advancing the use of machine learning for sustainable development, we will combine two key academic strengths of the Toulouse research ecosystem (AI + Economics) while contributing to the formation of a new generation of researchers specialized in the use of machine learning for sustainable international development.
Guaranteed and frugal deep learning
Principal Investigator: Malgouyres, François – UT3 / IMT
Co-Chair: Landsberg, Joseph – Texas A&M University
The project aims at providing guarantees for deep learning methods and building new frugal architectures.
It consists of three complementary parts.
In the first part, we will seek to establish theoretical guarantees, which can be computed for moderate-size problems, for the learning of deep ReLU networks. As deep neural networks are used in a context where the number of examples is generally smaller than the number M of parameters of the network, the guarantees are based on properties valid on a subset of parameters, leading to the description of functions whose 'complexity' C is much smaller than the number of parameters. In this work, we will study different notions of local complexity, similar to a 'local pseudo-dimension', based on a geometric analysis of neural networks.
Another complementary and simpler approach is based on the quantification of predictive uncertainty a posteriori, once the model is trained. The idea is to statistically evaluate the uncertainty in an efficient and guaranteed way. General techniques (called conformal prediction) exist for 'black-box' models. We will develop variants that exploit the structure of the models and tasks in a tighter way (guaranteed tuning of multiple hyperparameters, multi-task learning settings).
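Split conformal prediction, the black-box baseline mentioned above, fits in a dozen lines. The zero predictor used in the check below is deliberately crude, to show that the marginal coverage guarantee holds regardless of model quality (under exchangeability of calibration and test data); everything else is a toy setup.

```python
import numpy as np

def split_conformal(predict, X_cal, y_cal, X_test, alpha=0.1):
    """Split conformal prediction: wrap any point predictor into
    intervals with finite-sample marginal coverage >= 1 - alpha."""
    scores = np.abs(y_cal - predict(X_cal))         # nonconformity scores
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))         # conformal quantile rank
    qhat = np.sort(scores)[min(k, n) - 1]
    preds = predict(X_test)
    return preds - qhat, preds + qhat

# Toy check: even a predictor that always outputs 0 yields valid coverage.
rng = np.random.default_rng(0)
predict = lambda X: np.zeros(len(X))
X_cal = np.zeros((1000, 1)); y_cal = rng.standard_normal(1000)
X_test = np.zeros((10_000, 1)); y_test = rng.standard_normal(10_000)
lo, hi = split_conformal(predict, X_cal, y_cal, X_test, alpha=0.1)
coverage = np.mean((y_test >= lo) & (y_test <= hi))
```

A bad model simply produces wide intervals; the structured variants envisioned here aim to shrink them without losing the guarantee.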
For embeddability purposes, we will study quantized networks, associated models, and algorithms.
We will generalize and adapt a recent study providing convergence guarantees for the 'straight-through estimator', the main algorithm used to optimize quantized weights in deep learning. We will also continue ongoing work on quantization-aware training and robustness, and study new low-bit matrix models, leading to more expressive networks at constant memory or computational cost. The methodological developments will be tested on time series and object-detection tasks.
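The straight-through estimator itself can be sketched in a few lines of NumPy: the forward pass uses quantized weights, while the backward pass pretends quantization is the identity, so the gradient flows "straight through" to the full-precision weights. The 2-bit grid and the linear-regression task below are illustrative.

```python
import numpy as np

def quantize(w, n_bits=2):
    """Round weights to the uniform n-bit grid on [-1, 1]."""
    levels = 2 ** n_bits - 1
    return np.clip(np.round((w + 1) / 2 * levels) / levels * 2 - 1, -1, 1)

rng = np.random.default_rng(0)
X = rng.standard_normal((256, 4))
w_true = np.array([1 / 3, -1.0, -1 / 3, 1.0])  # each entry on the 2-bit grid
y = X @ w_true

w = np.zeros(4)            # latent full-precision weights being trained
lr = 0.05
for _ in range(300):
    wq = quantize(w)                          # forward pass: quantized weights
    grad = 2 * X.T @ (X @ wq - y) / len(X)    # gradient w.r.t. wq ...
    w -= lr * grad                            # ... applied to w: the STE step

mse = np.mean((X @ quantize(w) - y) ** 2)
```

Because rounding has zero gradient almost everywhere, a true gradient step would never move the weights; the identity surrogate in the backward pass is exactly the heuristic whose convergence the chair studies.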
eXplainability science in artifiCIal intelligENCE
Principal Investigator: Marques-Silva, Joao – CNRS / IRIT
Co-Chair(s): Hurault, Aurélie – INP / IRIT, Cooper, Martin C. – UT3 / IRIT
It is widely accepted that explainable artificial intelligence (XAI) will represent one critical component to deliver trustworthy AI. Unfortunately, most of the existing work on XAI offers no guarantees of rigor.
This means that explanations can be incorrect and so can mislead human decision makers. Evidently, methods that give incorrect answers can never be worthy of trust. In contrast to the popular and possibly erroneous approaches to explainability, the DeepLever Chair of ANITI 1.0 proposed and investigated rigorous explanations, under the name of formal XAI. Formal explanations are characterized by a logic-based, model-precise definition of explanation, and so offer absolute guarantees of rigor. The strong progress observed in formal XAI in recent years, in good part led by researchers of the DeepLever Chair of ANITI 1.0, has also uncovered important challenges, critical to the future success of formal explainability. The Xcience Chair of ANITI 2.0 addresses these challenges, as well as the same challenges in sub-domains of AI other than Machine Learning (ML). The overarching challenge of the Xcience Chair of ANITI 2.0 is to metamorphose the initial successes of the DeepLever Chair into a new, rigorous and ubiquitous Science of Explainability that delivers trustworthiness.
Evolution of galaxies using Machine Learning
Principal Investigator: Moultaka, Jihanne – Ministère Educ. Nat. / IRAP
Co-Chair: Fraix-Burnet, Didier – CNRS / IPAG
This proposal aims at studying the evolution of galaxies using ML algorithms. Three years ago, we initiated a collaboration with D. Fraix-Burnet (co-chair and astrostatistician at IPAG, Grenoble) to classify galaxies using their spectra, a novel approach in the field of astrophysics. To this end, we use an unsupervised classification method called Fisher-EM, developed by C. Bouveyron (contributor and full Professor of Statistics at Université Côte d'Azur), which is well suited to high-dimensional data such as galaxy spectra. Within this project, D. Fraix-Burnet and I co-supervised a PhD student who will defend his thesis in early September, and I have been supervising a master's student at IRAP in Toulouse since early March. The results of this work are very exciting (two papers have been published and two more are in preparation). In the coming years, we want to explore other AI methods and all kinds of galaxy data to better constrain their physics, i.e. spectra at different electromagnetic wavelengths (or frequencies), as well as physical observables obtained from photometry, spectroscopy and imaging. This is made possible by the huge amount of data available from multi-wavelength surveys of galaxies observed with several telescopes. Our work will shed light on the evolution of galaxies and their formation processes, which remain an open question in astronomy today. To achieve these goals, we need to hire a PhD student in 2024 and a postdoc in AI who would work at the interface between our needs (the data) and the AI methods.
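Fisher-EM couples a Gaussian mixture model with a discriminative latent subspace suited to high-dimensional spectra. The plain EM loop underlying it can be sketched in one dimension as follows (a toy stand-in on synthetic data, not the actual Fisher-EM algorithm):

```python
import numpy as np

def em_gmm_1d(x, k=2, n_iter=50):
    """Plain EM for a 1-D Gaussian mixture. (Fisher-EM additionally fits a
    discriminative latent subspace for high-dimensional inputs such as spectra;
    this sketch shows only the underlying mixture-model machinery.)"""
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))        # spread-out init
    var = np.full(k, np.var(x))
    weights = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities r[i, j] = P(component j | x_i)
        d2 = (x[:, None] - mu[None, :]) ** 2
        log_p = np.log(weights) - 0.5 * (np.log(2.0 * np.pi * var) + d2 / var)
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture weights, means and variances
        nk = r.sum(axis=0)
        weights = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu[None, :]) ** 2).sum(axis=0) / nk + 1e-9
    return weights, mu, var, r.argmax(axis=1)
```

On two well-separated synthetic populations, the loop recovers both component means and assigns each point a cluster label, which is the kind of output used to build spectral classifications.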
Reinforcement Learning on a Diet
Principal Investigator: Rachelson, Emmanuel – ISAE / DISC
Co-Chair(s): Bertoin, David – IRT Saint-Exupéry / Smart Systems, Albore, Alexandre – ONERA / DTIS
Deep reinforcement learning (RL) – learning optimal behaviors from interaction data, using deep neural networks – is often seen as one of the next frontiers in artificial intelligence. While current RL algorithms do not escape the relentless pursuit of larger models, bigger data and more computation, we posit that real-world impacts of RL will also stem from algorithms that are relevant in the small-data regime, on reasonable computing architectures. RL is at a crossroads where one wishes to retain the versatility and representational abilities of deep neural networks while coping with limited data and resources. Under such real-world limitations, understanding how to preserve algorithmic convergence properties, robustness to uncertainties, worst-case guarantees, transferable features, or behavior-explanation elements is an open field. Hence, we endeavor to put RL on a diet, in order to reach a better understanding of frugal RL: its theoretical foundations, the many ways one can compensate for limited data, the sound algorithms one can design, and the practical impacts it can have on the many real-world applications where, intrinsically, data is costly and resources are limited, ranging from autonomous robotics to personalized medicine.
User-Centered Interactive Machine Learning for 3D Point Cloud Analysis
Principal Investigator: Serrano, Marcos – UT3 / IRIT
Co-Chair(s): Barthe, Loïc – UT3 / IRIT, Bredin, Hervé – CNRS / IRIT, Perez Sanagustin, Mar – UT3 / IRIT
The advent of point cloud acquisition techniques and devices, both in industry and research, allows the generation of 3D models for a variety of application domains: geographic information systems, civil engineering, cultural heritage, indoor mapping, among others. These point cloud acquisition applications generate massive clouds with hundreds of millions of points. However, processing large-scale acquired point clouds remains an open problem: the unstructured nature of point clouds, their dependence on scale, rotations and translations, their third dimension and the massive size of real acquired data make current machine learning approaches largely impractical for most real problems. Our objective is to overcome the limitations generated by the large number of annotated inputs and the long time required for training learning models by relying on interactive learning approaches. Our project will contribute novel explainable active learning methods combining interactive machine learning and active learning, as well as novel 3D interaction techniques exploiting point parameters to improve their classification by the machine learning model. Our project will be conducted by complementary and renowned experts in Human-Computer Interaction, Computer Graphics, Artificial Intelligence and Human Learning. We will also collaborate with two leading companies in data point acquisition and processing.
Anomaly Detection and Diagnosis
Principal Investigator: Travé-Massuyès, Louise – CNRS / LAAS
Co-Chair(s): Lasserre, Jean Bernard – CNRS / LAAS, Chanthery, Elodie – INSA / LAAS, Jauberthie, Carine – UT3 / LAAS, Pucel, Xavier – ONERA / DTIS
The AC chair project ADDX aims to bridge model-based and data-based methods for anomaly detection (AD) and diagnosis (DX), drawing mutual benefits and closely integrating them in a hybrid AI framework. It gives pride of place to anomaly detection because anomalies, also known as outliers or out-of-distribution observations, must be detected in data as they can indicate data corruption or faulty behavior. Trust in Artificial Intelligence (AI) systems depends on this, because their reliability relies on inputs lying within the training distribution. On the other hand, anomaly detection plays a crucial role in certifying data obtained from sensors or images, as well as in identifying symptoms that can be used to drive diagnosis reasoning and health management. Explicit knowledge extraction from data, and learning guided by knowledge applied to the previous problems, will be the keystones of the research for this chair. Shallow and deep learning approaches will be compared and synergistically integrated.
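As a minimal illustration of distribution-based anomaly detection (a generic textbook recipe, not the specific methods of the ADDX chair), one can score points by their squared Mahalanobis distance to the training distribution and flag large scores as out-of-distribution:

```python
import numpy as np

def fit_gaussian(x):
    """Fit the mean and regularized inverse covariance of in-distribution data."""
    mu = x.mean(axis=0)
    cov = np.cov(x, rowvar=False) + 1e-6 * np.eye(x.shape[1])
    return mu, np.linalg.inv(cov)

def mahalanobis(x, mu, cov_inv):
    """Squared Mahalanobis distance; large scores flag out-of-distribution points."""
    d = x - mu
    return np.einsum('ij,jk,ik->i', d, cov_inv, d)

# hypothetical 3-feature training set, then one typical and one anomalous query
rng = np.random.default_rng(1)
train = rng.normal(size=(500, 3))
mu, cov_inv = fit_gaussian(train)
scores = mahalanobis(np.array([[0.0, 0.0, 0.0], [8.0, 8.0, 8.0]]), mu, cov_inv)
```

A threshold on the score (e.g. a chi-squared quantile) then separates nominal inputs from candidate anomalies to be passed on to diagnosis reasoning.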
The proposal is organized in a balanced way between “blue sky” research and more applied research that meets socio-economic needs. Collaboration is foreseen with other chairs on the themes of polynomial optimization and robustness, as well as with the DEEL project of IRT Saint Exupéry. Industrial partnership is also planned, and several companies have already shown interest in this chair, namely Airbus, Continental, Batconnect, Carl-Berger Levrault and Vitesco Technologies. Applications in the medical domain are also foreseen, with first contacts made with Hôpital Purpan, CHU Toulouse.
The chair will be carried by a team of five researchers who bring skills in three complementary fields for the targeted tasks: AI, mathematics and automatic control. This spectrum of expertise will enable the achievement of hybrid AI results that advance the state of the art.
International attractivity chairs
Hybridizing AI and Large-scale Simulations for Engineering Design
Principal Investigator: Kopanicakova, Alena – Università della Svizzera italiana / IRIT
Co-Chair(s): Novello, Paul – IRT Saint-Exupéry, Bauerheim, Michael – ISAE / ISAE
Recently, scientific machine learning (SciML) has expanded the capabilities of traditional numerical approaches, by simplifying computational modeling and providing cost-effective surrogates. However, SciML surrogates suffer from the absence of explicit error control, computationally intensive training phases, and a lack of reliability in practice. HAILSED aims to tackle these challenges by (1) developing novel types of SciML surrogates, whose architecture incorporates physical and geometric constraints explicitly and whose validation error can be controlled a posteriori, (2) developing novel training algorithms, which leverage domain-decomposition-based approaches and utilize model parallelism, (3) hybridizing SciML surrogates with state-of-the-art numerical solution methods, which will be achieved by developing AI-equipped nonlinear field-split and domain-decomposition-based preconditioning strategies. Successful realization of the proposed methodologies will pave the way towards efficient and error-controlled solution of large-scale multi-physics and multi-scale problems by combining the efficiency of SciML surrogates with the accuracy and reliability of standard numerical approaches. The proposed developments will be carried out in collaboration with academic partners from Università della Svizzera italiana (Switzerland), ISAE Supaero, EPFL (Switzerland), Sandia National Laboratories (USA), and industrial partners from IRT Saint Exupéry.
Hybrid Policy Optimization for Safe and Efficient Robotic Manipulation and Locomotion
Principal Investigator: Righetti, Ludovic – New York University / LAAS
Co-Chair: Mansard, Nicolas – CNRS / LAAS
Robotics has seen tremendous progress in recent years thanks to advances in simulators, optimizers, and reinforcement learning. Despite impressive demonstrations, however, these methods have yet to be fully realized in real-world settings, and scaling beyond the lab is still challenging. What if we could unlock the full potential of constrained optimization and reinforcement learning to train safe, effective policies for dynamic locomotion and manipulation tasks? In HYPOMEL, we propose to achieve this objective by exploiting the results of ANITI 1.0 and of past collaborative research projects. We will first revisit the way reinforcement learning algorithms work by taking advantage of the consolidated expertise gained through years of progress in trajectory optimization, and by exploiting recent advances in differentiable simulators. Our aim is to achieve faster, more accurate convergence, reducing the computational burden and providing strict guarantees on the resulting optimal policy. We will then incorporate strategies for planning across hybrid dynamic modes, such as switching between continuous and discrete control, and integrate multi-modal sensory information, including force and tactile sensing. These enhancements will increase the robustness and safety of our policies during physical interactions, enabling effective manipulation and locomotion in complex environments. The experimental capabilities of our team, along with the expertise of Ludovic Righetti in ANITI, will pave the way for a comprehensive and revolutionary framework that hybridizes the advantages of both MPC and RL, ultimately enabling the development of scalable, complex, and reliable robotic behaviors that can be deployed in real-world applications.
It will benefit from direct interaction with the synergy chair C3PO (should it be accepted) and other chairs in numerical optimization and machine learning, from effective collaboration with the robot manufacturer PAL and with the end-user AIRBUS, and will consider direct socio-economic impacts, with particular attention to dissemination to younger audiences through dedicated robotics activities.
Trust and Responsibility in Artificial Intelligence
Principal Investigator(s): Bolte, Jérôme – TSE GE / TSE-R, Smolin, Alexei – TSE GE / TSE-R, Eynard, Jessica – UT Capitole / IDP, Loubes, Jean-Michel – UT3 / IMT
Co-Chair(s): Mangematin, Céline – UT Capitole / IDP, May, Xiaoyi – UT2J / IMT, Rhodes, Andrew – UT Capitole / ?, Pauwels, Edouard – UT3 / IRIT, Renault, Jérôme – TSE GE / TSE-R
Artificial intelligence (AI) is revolutionizing numerous sectors, leading to significant economic, legal, social, and regulatory consequences. Its transformative potential highlights the importance of addressing both opportunities and challenges. Recently, an open letter by prominent AI experts and industry leaders  advocated that “AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.” They also emphasized that “AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; …
a robust auditing and certification ecosystem; liability for AI-caused harm.” There is indeed an urgent need to develop governing principles for AI development and operations, akin to quality standards for goods and services. However, regulating, auditing, and improving AI systems present unique challenges. First, AI systems are highly complex mathematical processes, making the identification, measurement, and mitigation of risks and biases scientifically and technically demanding tasks. Second, AI
is a multi-purpose technology with societal impacts that extend far beyond those of conventional goods and services. Achieving successful AI regulation and fostering trust in AI systems necessitates interdisciplinary collaboration among AI experts, social scientists, and legislators.
Our interdisciplinary research team combines expertise in law, statistics, optimization, and economics, with the goal of cultivating an AI ecosystem that is efficient, economically viable, and respectful of rights, liberties, and social welfare. By addressing the pressing concerns surrounding AI, we aim to establish the foundation for a sustainable, ethical, and responsible future in AI technology, ultimately transforming the way AI systems are managed and governed.
EXPLainablE and physics-informed Ai for Regional weaTHer prediction
Principal Investigator(s): Risser, Laurent – CNRS / IMT, Trojahn, Cassia – UT2J / IRIT, Raynaud, Laure – Météo-France / CNRM
Co-Chair(s): Lapeyre, Corentin – Cerfacs, Mohanamuraly, Pavanakumar – Cerfacs, Masson, Valery – Météo-France / CNRM, Bovalo, Christophe – BULL SAS / ATOS, Kaminski, Gwenael – UT2J / CLLE
Accurate predictions of future weather conditions are essential for the safety of people and goods, and for the management of a wide range of economic activities. Since the mid-20th century, weather prediction has relied on physical modelling of the atmospheric dynamics, with significant and regular performance improvements. However, the precise prediction of high-impact local phenomena remains difficult and computationally expensive. The rapid progress of Artificial Intelligence (AI) technologies, with early impressive applications to weather forecasting, offers unexpected opportunities for a new generation of weather predictions based on hybridisation of traditional physical models and AI, allowing for increased accuracy and timeliness in a cost-effective way. This chair will focus on some of the most promising avenues to leverage the potential of AI for very high resolution probabilistic weather forecasts, with efforts dedicated to the development and evaluation of both hybrid approaches and purely data-driven forecasts. Outcomes will provide 1/ a thorough overview of the strengths and weaknesses of these approaches, with an assessment of their possible roles in the future of operational weather forecasting, and 2/ an innovative physics-informed AI prediction system that optimally combines the best of the physics and AI worlds. To achieve these ambitious goals, research will be conducted in a holistic framework, supported by a strong multidisciplinary consortium gathering expertise in atmospheric modelling, AI and social sciences. Beyond the methodological and technological breakthroughs likely to emerge from this work, a particular emphasis will be placed on the development of explainable and robust AI solutions, as well as on their transfer to and acceptance by end-users. This is motivated by legal concerns, given that weather forecasting is likely to be considered a high-risk system under the European Commission's AI Act.
Advanced Eddy-resolving Global and Integrated modeling using machine learning for accurate climate predictions
Principal Investigator(s): Zhang, Sixin – INP / IRIT, Renault, Lionel – IRD / LEGOS
Co-Chair(s): Benshila, Rachid – CNRS / LEGOS, Simon, Ehouarn – INP / IRIT
Accurate prediction of Earth's potential climate trajectories under human pressure, on both short and long time scales, relies on correctly representing physics, chemistry and biology in global climate models. In the last two decades, two relevant findings have emerged concerning oceanic circulation and air-sea fluxes: the key role potentially played by eddy-scale (O(100) km) oceanic processes and by their related air-sea interactions. The continued increase in computational resources has now made it possible to run regional mesoscale coupled models, global ocean submesoscale-permitting stand-alone models with a spatial resolution of ~2 km, and even global ocean-atmosphere coupled models with a spatial resolution of a few kilometers. By essence, future climate is uncharted territory, which precludes any direct assessment of the realism of climate scenarios. To overcome this issue, the climate community can rely on paleoclimate observations and reconstructions, and on the assumption that if one can reproduce a past climate, one should be able to realistically simulate a future climate. To this end, global coupled simulations must be performed over centennial periods of time (e.g., paleoclimatic periods), which implies a computational cost that, for high-resolution simulations, may only become affordable between 2050 and 2080. In terms of oceanic forecasting, high-resolution simulations can be too heavy to run, and the ocean-atmosphere coupling needs to be better taken into account.
Are we doomed to use very high-resolution global models or can we rely on new generation parameterizations of fine scale oceanic processes and Ocean-Atmosphere interactions? AEGIR proposes a way forward, by developing new methodologies in machine learning to be applied in Earth Science and by developing parameterizations of oceanic mesoscale processes and air-sea fluxes.
Cobots with Conversation, Cognition and PerceptiOn
Principal Investigator(s): Asher, Nicholas – Emérite / IRIT, Serre, Thomas – Brown University, Stasse, Olivier – CNRS / LAAS, VanRullen, Rufin – CNRS / CERCO
Co-Chair(s): Arnold, Alexandre – Airbus, Boutin, Victor – Brown University, Flayols, Thomas – CNRS / LAAS, Hunter, Julie – LINAGORA, Muller, Philippe – UT3 / IRIT, Mansard, Nicolas – CNRS / LAAS, Pellegrini, Thomas – UT3 / IRIT
Recent AI models have achieved remarkable success in specific domains (e.g. vision, language, robotic agent control), and there is a push towards ever larger models combining multiple input and output modalities. In theory, multimodal representations can help vision scientists by endowing sensory inputs with semantic information; similarly, linguists can use them to ground NLP tokens in the sensorimotor environment and create a form of referential meaning; roboticists can also take advantage of these versatile representations for navigation and action planning. But in practice, current models rely on brute-force training approaches using billions of labelled examples, while the datasets and computing resources available to academic and industrial researchers are typically much smaller. Compared to artificial neural networks, real brains learn much more efficiently; we thus take inspiration from the cognitive science idea of a Global Workspace (GW) to build a novel class of AI systems (PI: VanRullen). The GW, a unique model of multimodal grounding (encompassing perception, action and semantic representations), can promote advances in perceptual models (PI: Serre), and support both top-down interactions (from language and semantics to perception and action) of interest to linguists (PI: Asher), and bottom-up interactions (from active perception and navigation to semantic abstractions) of interest to roboticists (PI: Stasse). The high-risk/high-gain hypothesis is that the modalities complement one another synergistically, such that the whole system is much more efficient than the sum of its parts, not just for multimodal tasks but also when evaluated in the initial domains (vision, NLP, robotics).
Building frugal perceptual and cognitive models that can support language grounding and embodiment and provide semantic representations to robotic agents is expected to have important beneficial consequences for ANITI’s industrial partners (e.g. Airbus, Linagora).
Certified AI for Understanding Intracellular Dynamics
Principal Investigator(s): Cortés, Juan – CNRS / LAAS, Weiss, Pierre – CNRS / IMT
Looking at cells with a standard microscope might give the impression of a relatively simple structure.
In reality, understanding the sophisticated molecular activity at the basis of life is arguably one of the greatest current challenges in biology. Joint advances in artificial intelligence, microscopy and structural bioinformatics point to significant breakthroughs in the coming years. However, this requires the development of new theories and techniques that form the core of this project.
We want to devise new tools to visualize, model and understand dynamic intracellular processes.
This usually requires reasoning at multiple spatio-temporal scales. Therefore, different experimental techniques and models are necessary to provide complementary information for the overall understanding of complex processes involving molecular and supra-molecular systems. A clear example is the set of processes related to genome organization and regulation implicating intrinsically disordered protein regions (IDRs), which play an essential role in the cell but still escape current technologies.
Imaging such molecular systems is an intricate problem due to physical barriers such as the diffraction limit in optics or diffusion in live imaging. Only highly noisy and incomplete information can be gathered from techniques such as cryogenic electron microscopy or super-resolution fluorescence microscopy.
By combining carefully designed molecular modeling methods, physics-informed inverse problem solvers and experimental data, we plan to access, in a certified manner, information that is not yet available.
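A toy example of a physics-informed inverse problem solver of this kind, with an assumed 1-D Gaussian blur as the forward model and a smoothness prior (both illustrative choices, far simpler than real microscopy models):

```python
import numpy as np

def gaussian_blur_matrix(n, sigma=2.0):
    """Dense 1-D Gaussian convolution operator (the assumed forward physics)."""
    i = np.arange(n)
    a = np.exp(-0.5 * ((i[:, None] - i[None, :]) / sigma) ** 2)
    return a / a.sum(axis=1, keepdims=True)

def tikhonov_deconvolve(a, y, lam=1e-2):
    """Regularized inversion: argmin_x ||a x - y||^2 + lam ||D x||^2,
    where D (first differences) encodes a smoothness prior on the signal."""
    n = a.shape[1]
    d = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)
    return np.linalg.solve(a.T @ a + lam * d.T @ d, a.T @ y)

n = 64
x_true = np.zeros(n)
x_true[20:40] = 1.0            # a simple box signal
a = gaussian_blur_matrix(n)
y = a @ x_true                 # blurred (noiseless) observation
x_hat = tikhonov_deconvolve(a, y)
```

The regularization term is what makes the ill-posed inversion stable; choosing it from physical knowledge of the specimen is one way priors enter such solvers.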
To achieve our goals, we will be assisted by experts in artificial intelligence, cell and molecular biology, physics and optics. The experimental side of the project will rely on two cutting edge imaging platforms in Toulouse, the METI and the LITC.
This interdisciplinary research project should provide tools for a better understanding of the functional roles of IDRs in genome regulation processes, opening the road to future therapies.
REpresentation Learning for Earth Observation
Principal Investigator(s): Inglada, Jordi – CNES / CESBIO, Dobigeon, Nicolas – INP / IRIT, Fauvel, Mathieu – INRAE / CESBIO, Valero, Silvia – UT3 / CESBIO
Co-Chair(s): Oberlin, Thomas – ISAE / ISAE, Michel, Julien – CNES / CNES, Gürol, Selime – CERFACS
Recent Earth Observation (EO) systems have opened up new opportunities for land survey systems that provide critical information for climate change monitoring, mitigation, and adaptation. Monitoring Essential Climate and Biodiversity Variables (EVs) provides key information to understand climate, biodiversity and environmental changes. However, retrieving EVs from multi-source data is challenging due to the singularities of EO data, such as indirect observation of interest variables, varying spatial resolution and irregularly sampled time series.
Deep learning (DL) models offer promising solutions to learn complex patterns from huge amounts of data. However, most of the recent models lack physical consistency and interpretability. Furthermore, they are not able to process data with irregular and unaligned sampling, which is common in multi-modal EO.
Training also requires large amounts of labeled data, which are scarce and noisy in EO. Consequently, current models have a restricted usage in large scale EO systems.
This project will develop new self-supervised representation learning methods to produce semantically meaningful probabilistic representations from high-dimensional multi-modal EO data. The originality lies in incorporating prior knowledge from physical models into DL, thus proposing advances in uncertainty estimation and interpretability. The proposed hybrid AI system will blend physical priors and DL to pre-train models that can learn (1) semantically meaningful representations related to EVs and (2) task-agnostic generic embeddings (AI-ready data) that can be used by downstream tasks. The system will process multi-modal data to capture complementary spatio-temporal patterns. Physics-guided DL methods will be designed to condition the decoding of generic embeddings to retrieve and forecast EVs and their uncertainties.
To ensure the continuity of land monitoring, the system will use new data assimilation strategies combining satellite observations with pre-trained model forecasts. Continual learning will be used to update the models in response to new EO data. Non-stationary and long-term trends beyond the temporal range of the initial training will be accounted for. The project raises scientific questions regarding joint probabilistic representation learning, incorporation of physical prior information, efficient use of pre-trained models, and continuous model updating with newly acquired data and new on-orbit sensors.
Combining Polynomial Optimization and Machine Learning: Application to Power System Decision Support Tools
Principal Investigator(s): Magron, Victor – CNRS / LAAS, Panciatici, Patrick – RTE, Lasserre, Jean-Bernard – CNRS / LAAS
Co-Chair(s): Henrion, Didier – CNRS / LAAS, Korda, Milan – CNRS / LAAS, Skomra, Mateusz – CNRS / LAAS, Ruiz, Manuel – RTE, Loho, Georg – University of Twente / University of Twente
There is an increasing need for efficient methods to approximate values of secure operating conditions for electrical power systems. Indeed, recent and ongoing changes in the European power network, such as the increase in renewable energy sources interfaced by power electronic devices, are bringing up new challenges in terms of power grid security and large-scale stability assessment. The optimal power flow (OPF) problem aims at determining an optimal steady-state operating point for an alternating-current (AC) electric power system in terms of a given objective function, usually the power generation costs or power losses per time unit, subject to both electrotechnical equality constraints and engineering limits. AC-OPF
problems can be modeled as nonlinear optimization problems involving polynomials in complex variables. We recently proposed convex relaxations that made it possible to approximate the optimal values of some large-scale AC-OPF instances with thousands of variables.
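To see the kind of optimization core involved, consider a toy convex economic dispatch, solvable in closed form from the KKT conditions (real AC-OPF adds the nonconvex polynomial power-flow constraints that the chair's relaxations target; the cost coefficients and demand below are arbitrary):

```python
import numpy as np

def economic_dispatch(c, demand):
    """Toy dispatch: minimize sum_i c_i * p_i**2  s.t.  sum_i p_i = demand.
    Stationarity of the Lagrangian gives 2*c_i*p_i = lam for every generator,
    i.e. all marginal costs equal the multiplier lam; substituting into the
    balance constraint yields the closed form below."""
    lam = demand / np.sum(1.0 / (2.0 * c))
    return lam / (2.0 * c), lam

# three generators with quadratic costs, 7 units of demand
c = np.array([1.0, 2.0, 4.0])
p, lam = economic_dispatch(c, demand=7.0)
```

Here the dispatch comes out as p = [4, 2, 1] with every marginal cost 2*c_i*p_i equal to lam = 8, the textbook equal-marginal-cost condition that nonconvex AC constraints destroy.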
The goal of this ambitious collaborative project between the academic partner POP from CNRS LAAS
and the industrial partner RTE is to combine efficient and accurate polynomial optimization techniques with machine learning (ML) tools to solve AC-OPF instances at global optimality, and to provide decision-making tools for transmission system operators.
The first main research direction is to develop frameworks embedding fast convex optimization algorithms, potentially mixing classical interior-point methods and ML/data-driven schemes.
The second main research direction is to rely on these frameworks to tackle important problems in static and dynamical optimization arising from stability assessment of large power grids and minimization of active power generation with a mixture of continuous and integer variables.
This chair project shall lead us to address both directions by providing fast yet accurate bounds for the underlying optimization problems.
Embeddability and safety assurance of ML-based systems under certification
Principal Investigator(s): Ducoffe, Mélanie – Airbus, Gauffriau, Adrien – Airbus, Pagetti, Claire – ONERA / ONERA
Co-Chair(s): Guiochet, Jérémie – UT3 / LAAS, Delmas, Kevin – ONERA / ONERA, Carle, Thomas – UT3 / IRIT
Embedding of ML model algorithms in safety-critical products, in particular in the aeronautical domain subject to stringent certification, is a significant need of transportation industries. CertifEmbAI aims at bridging some of the gaps brought by certification and safety constraints. The project will focus on the embeddability and verifiability aspects of ML-based systems.
The approach promoted by CertifEmbAI covers the full range from compressing / developing tiny ML models that are formally verified against the system and safety requirements, down to their low-level execution on complex hardware accelerators such as deep learning accelerators or GPUs, while ensuring that the semantics of the off-line trained model is preserved on the final hardware platform. For that purpose, four challenges have been identified. The first consists in enriching the ML training activities with constraints from the hardware targets and the verification needs. The second concerns the proper description of ML models, for exchanging them between frameworks and for interfacing them with usual parallel programming paradigms. The third challenge addresses the extensive use of formal methods to verify the correctness of the ML-based system, both off-line and at runtime, to ensure that no catastrophic situation can be reached due to poor performance. Finally, the last challenge tackles the predictable and safe deployment of ML models on complex hardware.
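One standard formal-methods building block for verifying such models is interval bound propagation, which computes sound output bounds of a ReLU network over a box of inputs; a minimal sketch on a random network (illustrative only, not CertifEmbAI's actual toolchain):

```python
import numpy as np

def ibp_bounds(layers, lo, hi):
    """Interval bound propagation through a ReLU network over the input box [lo, hi].
    Each affine layer W x + b is split into positive and negative parts of W so
    the output interval is sound; ReLU simply clips the bounds at zero."""
    for i, (w, b) in enumerate(layers):
        w_pos, w_neg = np.maximum(w, 0.0), np.minimum(w, 0.0)
        lo, hi = w_pos @ lo + w_neg @ hi + b, w_pos @ hi + w_neg @ lo + b
        if i < len(layers) - 1:            # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

def forward(layers, x):
    """Concrete execution of the same network."""
    for i, (w, b) in enumerate(layers):
        x = w @ x + b
        if i < len(layers) - 1:
            x = np.maximum(x, 0.0)
    return x

# hypothetical random two-layer network and an input box of radius 0.1
rng = np.random.default_rng(3)
layers = [(rng.normal(size=(8, 4)), rng.normal(size=8)),
          (rng.normal(size=(2, 8)), rng.normal(size=2))]
lo, hi = ibp_bounds(layers, np.full(4, -0.1), np.full(4, 0.1))
```

Every concrete output for an input in the box is guaranteed to lie inside [lo, hi], which is the property a verifier checks against the safety requirement (at the price of some over-approximation).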
The research activities will be supported by four industrial use cases, namely ACAS Xu, vision-based landing for aircraft, vision-based emergency landing for UAVs, and counter-UAV systems.
Physics-Informed Learning Methods for Continental Waters and Marine Risks
Principal Investigator(s): Roustant, Olivier – INSA / IMT, Monnier, Jérôme – INSA / IMT
Co-Chair(s): Baraille, Rémy – SHOM, Bouclier, Robin – INSA / IMT, Garambois, Pierre-André – INRAE / RECOVER, Garnier, Josselin – Ecole Polytechnique / CMAP, Lüthen, Nora – ETH Zürich, Noble, Pascal – INSA / IMT, Sanchez, Eduardo – IRT Saint-Exupéry
Flooding, whether inland or coastal, is a complex phenomenon that can be described by non-linear differential equations (PDEs, ODEs), though only partially in some situations. Accurately modeling river flows, from low flows (water shortage) to high flows (flooding), as well as marine submersions, is crucial for our societies. Purely physics-based approaches are limited in their completeness and in the running time of the associated computer codes. Purely data-driven methods are complementary but require huge amounts of data. Hybridizing physics-driven and data-driven approaches (hybrid AI) has shown dramatic improvements in idealized settings. The aim of this project is to investigate and develop hybrid AI algorithms for water-related extreme events (floods, inundations, marine submersions) and their associated risk management. Significant advances in model accuracy and explainability are expected. The research will be guided by challenging real-world cases, using our advanced computational codes to solve a hierarchy of physical models, possibly with their adjoint codes. The databases include in-situ historical measurements and, in some cases, satellite data. The research program is divided into five interconnected axes.
Physics-informed learning methods. Hybridization of two famous classes of models, Neural Network and Gaussian Process, with physical knowledge. Investigation of resulting surrogate models and data assimilation processes.
Reduced-basis methods. Model reductions based on hybrid PCA – NN like methods.
Multi-fidelity models. Techniques to account for a hierarchy of computer codes.
Uncertainty quantification. Risk assessment; global sensitivity analysis for model explainability.
Design of experiments. Strategies to create new data from computer codes.
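The first axis can be illustrated with a minimal physics-informed least-squares fit, where a polynomial surrogate is regularized by the residual of an assumed ODE u' + u = 0 at collocation points (a toy model and arbitrary weights, not one of the chair's hydraulic codes):

```python
import numpy as np

def poly_basis(t, deg=4):
    return np.vander(t, deg + 1, increasing=True)        # columns 1, t, ..., t^deg

def dpoly_basis(t, deg=4):
    b = np.zeros((len(t), deg + 1))
    for j in range(1, deg + 1):
        b[:, j] = j * t ** (j - 1)                       # d/dt of t^j
    return b

def fit_physics_informed(t_data, y_data, weight=10.0, deg=4):
    """Least-squares fit of u(t) = poly(t) to sparse data, penalized by the
    residual of the assumed physics u'(t) + u(t) = 0 at collocation points."""
    t_col = np.linspace(0.0, 2.0, 40)
    a = poly_basis(t_data, deg)                          # data-fit rows
    p = dpoly_basis(t_col, deg) + poly_basis(t_col, deg) # physics rows: u' + u
    m = np.vstack([a, np.sqrt(weight) * p])
    rhs = np.concatenate([y_data, np.zeros(len(t_col))])
    theta, *_ = np.linalg.lstsq(m, rhs, rcond=None)
    return theta

# only two data points; the ODE residual supplies the rest of the information
t_data = np.array([0.0, 1.0])
theta = fit_physics_informed(t_data, np.exp(-t_data))
```

With only two observations, the physics term is what lets the surrogate extrapolate close to the true decay e^{-t} away from the data, the same mechanism hybrid NN or GP models exploit at larger scale.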
The chair team comprises a panel of researchers and experts from academia and industry, who bring complementary skills in mathematical modeling (PDEs, ODEs, probabilistic models), computational sciences and machine learning.
Certifiable Auto-supervised Large Models
Principal Investigator(s): Mamalet, Franck – IRT Saint-Exupéry, Serrurier, Mathieu – UT3 / IRIT
Co-Chair(s): Ducoffe, Mélanie – Airbus, Sengnès, Coralie – INSERM / RESTORE
The project builds directly on the success of the 3IA ANITI and DEEL program: studying 1-Lipschitz neural networks, a class of robust-by-design neural networks. The team built in this first round was composed of mathematicians, computer science researchers, data scientists, and industry experts, and has published several papers in major conferences and journals. They have laid the groundwork for classification with 1-Lipschitz neural networks, covering the theory, the definition of optimal losses linked to optimal transport, and proofs of robustness, certifiability, and explainability.
Additionally, a full library called DEEL-LIP has been developed to train these kinds of neural networks as classical TensorFlow models.
In this project, we propose to further investigate 1-Lipschitz neural networks in the scope of self-supervised learning: the objective is to learn large models from unannotated data in several domains (medical/satellite images, time series, natural language processing) while maintaining the guarantees in terms of robustness, certifiability, and explainability. Self-supervised learning is a strong trend for classical networks, with applications in few-shot learning and semi-supervised learning, or as backbones. But, as far as we know, there is no contribution in the literature on self-supervised 1-Lipschitz neural networks.
We propose to tackle the unexplored domain of self-supervised 1-Lipschitz large models along three research axes. In the first axis, we will explore methods for self-supervised learning using optimal transport losses, to learn from unannotated data while still promoting the robustness of the neural network. In the second, we will investigate more recent and deeper 1-Lipschitz architectures, such as transformers, to enhance the learning capabilities of these networks on very large datasets and their generalization. Finally, we will work to establish the theory and certifiable guarantees for these self-supervised 1-Lipschitz neural networks. For industrial safety-critical applications, we will develop a set of pre-trained 1-Lipschitz networks for various domains, including satellite images, time series, language processing, and medical imaging, where data quantity and annotation are crucial.
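The core idea behind 1-Lipschitz layers can be illustrated with a minimal NumPy sketch: rescaling a weight matrix by its largest singular value makes the linear map 1-Lipschitz, and composing such layers with 1-Lipschitz activations (ReLU here) keeps the whole network 1-Lipschitz, which yields a certified bound on how much the output can change under an input perturbation. This shows only the spectral-normalization principle, not the actual DEEL-LIP implementation (which relies on further techniques such as orthogonality constraints and dedicated activations); all names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def spectral_normalize(W):
    """Rescale W so its largest singular value (its Lipschitz
    constant as a linear map) is exactly 1."""
    return W / np.linalg.norm(W, ord=2)

# A toy two-layer network: each linear map is 1-Lipschitz and ReLU is
# 1-Lipschitz, so the composition is 1-Lipschitz as well.
W1 = spectral_normalize(rng.standard_normal((16, 8)))
W2 = spectral_normalize(rng.standard_normal((1, 16)))

def f(x):
    h = np.maximum(W1 @ x, 0.0)   # ReLU activation
    return W2 @ h

# Certified robustness bound: ||f(x) - f(y)|| <= ||x - y|| for ANY inputs.
x = rng.standard_normal(8)
y = x + 0.01 * rng.standard_normal(8)   # small perturbation of x
delta_out = np.linalg.norm(f(x) - f(y))
delta_in = np.linalg.norm(x - y)
assert delta_out <= delta_in + 1e-12
```

The guarantee is what makes such networks attractive for safety-critical certification: unlike empirical adversarial testing, the bound holds for every possible input pair by construction.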
Hybridizing lEarning, seaRch and combinatorial Optimization for Industrial deCision-making
Principal Investigator(s): Guillaume, Romain – UT2J / IRIT, Teichteil-Koenigsbuch, Florent – Airbus, Gerchinovitz, Sébastien – IRT Saint-Exupéry, Thiebaux, Sylvie – UT / LAAS
Co-Chair(s): Artigues, Christian – CNRS / LAAS, Cesari, Tommaso – University of Ottawa, Fargier, Hélène – CNRS / IRIT, Poveda, Guillaume – Airbus, Roussel, Stéphanie – ONERA
Decision-making problems are ubiquitous in industry, from production optimization to in-service product operation management, including internal project and resource optimization. In the last decades, many data-driven (e.g. deep reinforcement learning) and model-based (e.g. AI planning, constraint programming) approaches have been investigated separately to solve those problems. While the former often need a huge amount of data, which is scarce in many industrial problems, the latter are not always suitable for problems that are hard to model accurately. Moreover, both approaches fail to solve large problems within reasonable time and computational resources, especially in the presence of uncertainty, which significantly aggravates the combinatorial explosion of the solution space. This chair will investigate tight hybridization techniques between data-driven and model-based approaches to decision-making, targeting three main objectives: scalability, robustness, and industrial use-case representativeness. It strives to open the door to optimized, reactive, and robust decision-making in large and complex industrial problem scenarios, while significantly lowering the computation and data cost of the solvers currently used in industry. The chair will bring together academic researchers from diverse institutions (LAAS, IRIT, IRT, ONERA, University of Ottawa) and scientific fields, including combinatorial optimization, search, and machine learning.
Industrial partners from the aerospace and automotive industries (Airbus, Liebherr, Vitesco) will second engineers and provide challenging use cases where hybrid methods are expected to reduce operational costs due to uncertainty, model inaccuracy, or solution suboptimality. This research will also benefit the health sector, where we will investigate with Oncopole how to optimally schedule radiotherapy treatments for cancer patients under uncertain medical pathway appointments, with a view to improving remission chances.