The scientific challenges dealt with in the research programs, resulting from the chair projects and use cases provided by ANITI's industrial partners, are broken down into twelve themes.
Certain themes are dealt with jointly within several research programs.
Learning with few or complex data
Learning with little data is a necessity in many fields in which high quality annotations made by human experts are needed to learn complex tasks (e.g., extraction of deep semantic and pragmatic information from text or from satellite imagery) but are very costly or simply not available.
This theme explores various approaches to low-resource learning, in particular different learning architectures inspired by "natural" or "biological" learning mechanisms. Humans and other animals can learn complex information from sparse and often noisy data, so it behooves us to examine such natural learning architectures if we wish to make progress in this area. Another approach we consider in this theme is to build generative models from very small datasets in order to create large annotated datasets. Yet another topic of study in this theme defines compact methods for the representation of high-dimensional data, suitable for deep learning, that respect physical, mathematical or other structural constraints that we know must govern the data.
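As a toy illustration of the generative-model idea mentioned above, the sketch below fits a class-conditional Gaussian to a handful of labelled points and samples synthetic annotated data from it. The function name and the Gaussian choice are our own illustrative assumptions, not a method prescribed by the program:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_with_gaussian_model(X, y, n_synthetic_per_class=100):
    """Fit a class-conditional Gaussian to a small labelled set and
    sample synthetic annotated points from it (illustrative sketch)."""
    X_aug, y_aug = [X], [y]
    for label in np.unique(y):
        Xc = X[y == label]
        mean = Xc.mean(axis=0)
        # small ridge on the covariance keeps sampling well-defined
        cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        synth = rng.multivariate_normal(mean, cov, size=n_synthetic_per_class)
        X_aug.append(synth)
        y_aug.append(np.full(n_synthetic_per_class, label))
    return np.vstack(X_aug), np.concatenate(y_aug)

# tiny labelled set: 5 points per class, 2 features
X = rng.normal(size=(10, 2)) + np.repeat([[0, 0], [4, 4]], 5, axis=0)
y = np.repeat([0, 1], 5)
X_big, y_big = augment_with_gaussian_model(X, y)
```

A real low-resource pipeline would of course use a far richer generative model, but the contract is the same: small annotated input, large annotated output.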
The ambition here is to develop a joint framework for harmonizing multi-source, multimodal, multiscale and temporally dynamic data into a coherent semantic representation, and for efficiently learning from and analysing such data, for example using approximate Bayesian computation. Applications are envisaged in the fields of remote sensing, transportation and healthcare.
Thread 1: Low resource learning
Thread 2: Representation learning
Thread 3: Multi-source, multiscale data
Chairs: Nicolas Dobigeon, Fabrice Gamboa, Jean-Bernard Lasserre, Thomas Serre, Thomas Schiex, Rufin VanRullen
Certifiable AI — Safe design and embeddability
This theme addresses the software/hardware architecture, verification and system-level assurance of certifiable AI systems. It draws on results from the themes of learning and optimisation, as well as AI and physical models.

The first topic addresses the design of certifiable embeddable AI architectures. The goal is to design physical/software protection mechanisms and constraints around machine learning or deep learning algorithms that will yield results consistent with certification requirements. The contributions will cover several types of requirements: mitigating floating-point errors offline, ensuring the capacity to assess a Worst Case Execution Time (WCET), and detecting hardware failures at runtime. The analysis of the error accumulation inherent to the execution of deep ML models, due to hardware or implementation choices, aims to fine-tune precision while minimizing CPU, memory and energy consumption for better embeddability and portability. Another goal of this thread is to design new processor architectures and a safety assessment methodology for the next generation of chips to be used in critical AI applications (vision-based perception, automotive and autonomous driving, or radar/lidar computing). These chips will integrate neural network hardware IP and should ensure the level of safety required by ISO 26262.
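The error-accumulation analysis mentioned above can be illustrated by comparing a reduced-precision forward pass against a float64 reference. This is a minimal empirical diagnostic of our own construction, not a certified bound of the kind the thread targets:

```python
import numpy as np

def layer(x, W, dtype):
    # one ReLU layer evaluated at the requested precision
    return np.maximum(W.astype(dtype) @ x.astype(dtype), 0)

def accumulated_error(x, weights):
    """Run the same network in float32 and float64 and return the largest
    absolute deviation of the outputs (illustrative, not a formal bound)."""
    lo, hi = x.copy(), x.copy()
    for W in weights:
        lo = layer(lo, W, np.float32)   # reduced precision
        hi = layer(hi, W, np.float64)   # reference precision
    return float(np.max(np.abs(lo.astype(np.float64) - hi)))

rng = np.random.default_rng(1)
weights = [rng.normal(size=(8, 8)) for _ in range(5)]
err = accumulated_error(rng.normal(size=8), weights)
```

Certification-grade analyses would replace this sampling comparison with interval or affine arithmetic to obtain sound worst-case error bounds.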
The second topic concerns the challenges of ML model selection and verification. The objective is to offer a high level of safety for machine-learning-based systems. The approach consists in investigating several means of verifying the trained model to check its compliance with the system requirements. The contributions will cover any argument that provides assurance on the model. The first approach concerns the formal verification of the trained model and will incorporate the development of new verification tools, the definition of verification strategies, and the combination of simulation and formal verification when formal verification alone is insufficient. The second approach consists in defining alternative techniques for cases where formal verification cannot be applied; an example of such an alternative is the definition of good learning practices that provide some confidence in the model. The challenges to be tackled include the lack of formal specifications, the lack of a formal definition of the properties to be verified, and the large size of the systems involved.
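One basic ingredient of the formal verification of trained models is bound propagation. The sketch below pushes an input box through a tiny ReLU network with interval arithmetic to obtain guaranteed output bounds; the network, weights and helper name are illustrative assumptions, not a tool from the program:

```python
import numpy as np

def interval_forward(lo, hi, weights, biases):
    """Interval bound propagation: given elementwise input bounds [lo, hi],
    return sound output bounds for a feed-forward ReLU network."""
    n = len(weights)
    for i, (W, b) in enumerate(zip(weights, biases)):
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        # positive weights propagate like-bounds, negative weights swap them
        new_lo = Wp @ lo + Wn @ hi + b
        new_hi = Wp @ hi + Wn @ lo + b
        lo, hi = new_lo, new_hi
        if i < n - 1:  # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

# tiny 2-layer network (hypothetical weights)
W1, b1 = np.array([[1.0, -1.0], [0.5, 0.5]]), np.zeros(2)
W2, b2 = np.array([[1.0, 1.0]]), np.zeros(1)
lo, hi = interval_forward(np.array([-0.1, -0.1]), np.array([0.1, 0.1]),
                          [W1, W2], [b1, b2])
```

Any input inside the box is guaranteed to produce an output inside `[lo, hi]`, which is the kind of argument a compliance check can build on; production verifiers tighten these bounds considerably.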
The third topic is system-level assurance. The objective is to offer guarantees and assurance at the level of a system that includes ML-based components. At the system level, it is mandatory to analyse whether the overall system fulfils the safety objectives. In particular, the safety assessment consists in elaborating dysfunctional model assumptions and evaluating the qualitative (cut sets on failure scenarios) and quantitative (e.g., reliability) attributes of the system. This is a specific work item of the DEEL project mission on certification. A second activity will focus on runtime verification, based on the design and implementation of an independent safety monitoring system embedded alongside the ML functions to observe their behavior at runtime and detect whether a dangerous situation occurs. Such a solution is classical in safety-critical systems; the novelty comes from the specification of the monitor, since no formal specification of the main system to be observed exists. The objective is to implement fault-tolerance mechanisms based on runtime monitoring to detect the potential safety impact of slight modifications of the inputs (e.g., adversarial inputs). An application to a UAV emergency landing system will be considered as a use case.
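A runtime monitor of the kind described can be sketched as a wrapper that probes the model's sensitivity to small input perturbations at inference time. The class name, thresholds and toy classifier below are all hypothetical illustrations:

```python
import numpy as np

class SafetyMonitor:
    """Illustrative runtime monitor: flags inputs whose small perturbations
    flip the model's decision, a possible sign of adversarial fragility."""
    def __init__(self, model, eps=0.05, n_probes=20, seed=0):
        self.model, self.eps, self.n_probes = model, eps, n_probes
        self.rng = np.random.default_rng(seed)

    def check(self, x):
        ref = self.model(x)
        for _ in range(self.n_probes):
            noise = self.rng.uniform(-self.eps, self.eps, size=x.shape)
            if self.model(x + noise) != ref:
                return False  # unstable decision: treat as dangerous
        return True

# toy classifier: thresholded linear score
model = lambda x: int(x.sum() > 0)
monitor = SafetyMonitor(model)
safe = monitor.check(np.array([1.0, 1.0]))        # far from the boundary
risky = monitor.check(np.array([0.001, -0.0005]))  # hugs the boundary
```

A deployed monitor would run independently of the monitored function and trigger a fall-back behavior (e.g., the emergency landing mode) when `check` fails.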
Thread 1: Embeddable AI architecture
Thread 2: AI Model selection and verification
Thread 3: AI System level assurance
Chairs: Serge Gratton, Joao Marques-Silva, Claire Pagetti
Data and anomalies
This theme aims to define advanced techniques based on hybrid AI for the detection, diagnosis and prognosis of anomalies or extreme events such as floods.
One topic of this theme is to define methods based on hybrid AI for detecting and diagnosing anomalies that are generic, explainable, certifiable and adaptive to evolving environments. The first objective is to highlight and understand how model-based and data-driven approaches can complement each other. The second objective is to be able to abstract data classifiers and map them to symbolic or analytical models suitable for diagnostic reasoning, for better explainability and acceptability.
The ambition is to define a general anomaly detection method that: (a) is not too dependent on a particular type of data, (b) takes into account rules coming from physics or logical constraints, (c) distinguishes between unbalanced sampling, noise, disturbances, and real anomalies, (d) adapts to evolving environments and accounts for dynamics, (e) is transferable to a twin system with automatic retuning, and (f) enables diagnosis, i.e., identification of the root causes of anomalies.
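A minimal hybrid sketch of requirements (b) and (c) above: residuals are computed against a physical model, then flagged with a data-driven robust threshold. The cooling model, noise levels and all parameters are illustrative assumptions:

```python
import numpy as np

def detect_anomalies(measurements, physical_model, times, threshold_sigmas=3.0):
    """Hybrid sketch: physics supplies expected values, statistics supply
    a robust threshold (median absolute deviation) for flagging residuals."""
    residuals = measurements - physical_model(times)
    med = np.median(residuals)
    mad = np.median(np.abs(residuals - med))
    sigma = 1.4826 * mad  # MAD-to-std factor for Gaussian noise
    return np.abs(residuals - med) > threshold_sigmas * sigma

# hypothetical physics: free cooling toward a 20-degree ambient temperature
model = lambda t: 20 + 60 * np.exp(-t / 10.0)
t = np.linspace(0, 50, 200)
rng = np.random.default_rng(2)
obs = model(t) + rng.normal(0, 0.3, size=t.size)
obs[120] += 8.0  # injected fault
flags = detect_anomalies(obs, model, t)
```

Because the threshold is estimated robustly from the data, ordinary sensor noise is not flagged while the injected fault is, which is exactly the noise-versus-anomaly distinction requirement (c) asks for.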
Different use cases will be used to validate the proposed techniques, covering, e.g., predictive maintenance and diagnosis of electronic boards, cobots or industrial systems represented by digital twins, and the detection of floods using numerical simulation enhanced by machine learning techniques.
Chairs: Jean-Michel Loubès, Serge Gratton, Jean-Bernard Lasserre, Louise Travé-Massuyès
Explainability
The explainability or interpretability of machine learning algorithms has become an important issue in the foundations of AI. It is crucial for making sophisticated AI systems acceptable to the general public and for their integration into safety-critical systems, such as those that will be part of future intelligent mobility.
This theme includes three threads: one featuring a logical approach to explainability, a second a statistical approach, and a third addressing hybridization approaches that can leverage the best of both.
Logical methods are based on representing a learning algorithm by a set of first-order formulas, from which one can study, within a general logical framework, the various notions of explanation that scientists, philosophers and logicians have developed. These explanations are useful for analyzing robustness and detecting biases. The main challenge is to scale up these methods; one of our aims is to formalize existing types of explanation in a unified setting. Statistical techniques like Entropic Variable Boosting consist in disrupting inputs and measuring the effect on classifications, thus indicating key learning parameters. A statistical theory of deformations like optimal transport can generate counterfactual explanations, along with measures for the counterfactual relations ranging from simple L1 measures to Wasserstein distances that preserve probability distributions. Exploiting unsupervised learning for representation disentanglement is another way to provide interpretability. These methods can be efficient but do not formally guarantee their results.
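The input-disruption idea can be sketched as a permutation-based importance score: shuffle one variable at a time and measure how much the classifier's decisions change. This illustrates the general principle only; it is not the actual Entropic Variable Boosting procedure:

```python
import numpy as np

def perturbation_importance(model, X, n_trials=30, seed=0):
    """Shuffle each input variable in turn and record the average fraction
    of predictions that change (illustrative perturbation-based score)."""
    rng = np.random.default_rng(seed)
    baseline = model(X)
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_trials):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # disrupt variable j only
            scores[j] += np.mean(model(Xp) != baseline)
    return scores / n_trials

# toy classifier that only uses the first feature
model = lambda X: (X[:, 0] > 0).astype(int)
X = np.random.default_rng(3).normal(size=(100, 3))
imp = perturbation_importance(model, X)
```

Variables the model ignores score exactly zero, while the decisive variable scores positively, which is the kind of signal these statistical explanation methods surface.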
The next step in this theme is to build hybrid combinations, putting logic into the statistics and vice versa, with a view to improving both the rigor of an explanation and the efficiency of obtaining it. We will provide translations between logical frameworks and more mathematical ones employing notions from geometry and topology, to provide formally guaranteed and efficient methods for interpreting and explaining ML behavior. We will validate these techniques on industrial data, for example in image recognition for autonomous vehicles or in language processing.
Thread 1: Explainability with logical methods
Thread 2: Explainability with statistical methods
Thread 3: Hybrid XAI: combining statistical and logical methods
Chairs: Leila Amgoud, Joao Marques-Silva, Jean-Michel Loubès, Thomas Schiex, Louise Travé-Massuyès
Fair learning
This theme develops new methods to detect and then eliminate or control undesirable biases in learning algorithms and in training and testing datasets. In many applications, biases can arise from an unbalanced representation of operating and environmental conditions, mislabelling, or incomplete data descriptions leading to erroneous correlations. Learning models must be able to accommodate and deal with such biases.
In this theme we also examine how these methods meet or challenge legal and ethical requirements for AI systems. This theme aims to:
1) Provide formal and legal definitions of biases that can lead to traceable and feasible controls on algorithms and on the large volumes of data they may exploit
2) Understand the nature and epistemological consequences of biases (effects of sample distribution given during training and generalization error, learnability effects, sampling of the dataset, legal or technical constraints)
3) Understand the effect of meeting "Fairness conditions" on AI system performance
4) Investigate the interactions between bias and explainability via counterfactual and logical methods.
We will apply methods for discovering and controlling biases to critical systems that must meet certification requirements. Different use cases, for example in the field of transportation or health, will be considered to validate our work.
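As a concrete example of a formal bias measure of the kind objective 1) calls for, the sketch below computes the demographic-parity gap, the difference in positive-decision rates between two protected groups. The metric choice and the toy data are illustrative assumptions:

```python
import numpy as np

def demographic_parity_gap(predictions, group):
    """One common formal bias measure: the absolute difference in
    positive-decision rates between two protected groups."""
    rates = [predictions[group == g].mean() for g in (0, 1)]
    return abs(rates[0] - rates[1])

# toy decisions for ten individuals, five per group
pred = np.array([1, 1, 1, 0, 1, 0, 0, 0, 1, 0])
grp  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
gap = demographic_parity_gap(pred, grp)  # group 0: 4/5, group 1: 1/5
```

Such a scalar is traceable and auditable, which is what makes it a candidate control in a certification setting; other definitions (equalized odds, calibration) trade off differently against performance, as studied in this theme.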
Thread 1: Analysis of bias for fairness
Thread 2: Analysis of bias for data and algorithms in critical system design certification
Chairs: Leila Amgoud, Céline Castets-Renard, Fabrice Gamboa, Jean-Michel Loubès, Claire Pagetti, Joao Marques-Silva
AI and physical models
Analytical models for analyzing complex processes (physical, chemical, biological, …), like the planet’s weather, known as physical models, are often incomplete or intractable. Complementing or even replacing such models with data-driven ones using machine learning has therefore become a popular approach in many fields of science and engineering. ML techniques have been shown to solve complex classification or regression tasks very efficiently for data such as images, text and audio signals. This theme investigates how ML techniques can be extended to solve problems involving physics, represented either by raw data or by partial differential equations. It features a hybrid approach to AI, mixing physical constraints with machine learning methods, energy models, etc.
The first topic in this theme concerns new techniques for accelerating the simulation and optimization of physical models via statistical and geometric ML approaches. The objective is to explore how physical information can be used to train ML algorithms, the performance that can be achieved in terms of computational effort and accuracy compared to standard methods from mathematical physics, and how to replace (totally or partially) numerical simulations with ML. This topic is multidisciplinary, combining physical modelling and applied mathematics (data assimilation, optimization, complex simulation, statistical learning, uncertainty quantification, sensitivity analysis). We will also design new tools for evaluating these approaches: standard evaluation tools derived for traditional models are insufficient to assess the generalization capacity and the overall bounds of data-driven ones, so new tools are needed to quantify the uncertainty of these models and to evaluate how much each input contributes to the output uncertainty (sensitivity analysis). Several approaches are explored to construct hybrid physical model simulations. One very promising way consists in penalizing with a cost function that encodes information about physical features of the system: an extra regularizing term is added to the fit criterion when the machine learning model is identified, ensuring that the statistical model provides predictions approximately similar to those of the physical model. Another relevant research track is to design machine learning models that incorporate smart geometric representations taking into account the physical nature of the data. One of the main aims of this thread is to design and understand HPC-based numerical methods for solving complex problems; boundary conditions on such problems lead to non-convex constrained optimization problems for which novel computational algorithms are needed.
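The penalized fit criterion described above can be made concrete in a toy polynomial regression, where an extra term pulls the statistical model toward a physical model's predictions. All names, the polynomial basis and the closed-form solve are our own illustrative choices:

```python
import numpy as np

def fit_with_physics_penalty(t, y, physics_pred, lam=1.0, degree=3):
    """Least squares with a physics regularizer:
       minimize ||A c - y||^2 + lam * ||A c - physics_pred||^2.
    Setting the gradient to zero gives the normal equations
       (1 + lam) A^T A c = A^T (y + lam * physics_pred)."""
    A = np.vander(t, degree + 1)
    lhs = (1 + lam) * A.T @ A
    rhs = A.T @ (y + lam * physics_pred)
    return np.linalg.solve(lhs, rhs)

t = np.linspace(0, 1, 50)
rng = np.random.default_rng(4)
y_phys = t**2                                     # physical model output
y_noisy = y_phys + rng.normal(0, 0.3, size=t.size)  # noisy observations

c_plain = fit_with_physics_penalty(t, y_noisy, y_phys, lam=0.0)   # pure LS
c_hybrid = fit_with_physics_penalty(t, y_noisy, y_phys, lam=10.0) # hybrid
phys_err = lambda c: float(np.linalg.norm(np.vander(t, 4) @ c - y_phys))
```

With the penalty active, the fitted model stays close to the physics instead of chasing the noise, which is precisely the role of the extra regularizing term in the text.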
In a second topic we will explore how to improve machine learning models with physical constraints, focusing in particular on data assimilation (DA) techniques that we wish to systematise and implement cleverly. DA algorithms combine knowledge of the temporal dynamics of a parameter-dependent physical system with indirect observations of it to provide an estimation of its state at later times. When the size of the state vector is large and computation cost is a concern, state-of-the-art algorithms rely on truncating the estimation backward in time, which introduces approximate a priori statistics and implies a loss of the optimality properties of the estimator. Additionally, these algorithms are principally based on iterative techniques originally developed for linear dynamics and observation operators, making them inadequate for highly nonlinear systems. Variational ML algorithms and recurrent networks provide a natural framework for formalizing questions of truncation and nonlinearity, providing an opportunity to develop algorithms that are attractive in terms of accuracy and, hopefully, computational cost.
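The basic DA building block can be illustrated with a minimal linear Kalman filter that combines a toy position/velocity dynamics with noisy position observations. The setting is a deliberately simple stand-in for the large-scale, nonlinear problems discussed above:

```python
import numpy as np

def kalman_assimilate(x0, P0, obs, F, H, Q, R):
    """Minimal linear Kalman filter: alternate a forecast with the
    dynamical model F and an update with each indirect observation."""
    x, P = x0, P0
    for z in obs:
        # forecast with the physical dynamics
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the observation through operator H
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
    return x, P

F = np.array([[1.0, 1.0], [0.0, 1.0]])  # position/velocity dynamics
H = np.array([[1.0, 0.0]])              # we only observe the position
Q = 1e-4 * np.eye(2)
R = np.array([[0.25]])
rng = np.random.default_rng(5)
truth = np.array([0.0, 1.0])
obs = []
for _ in range(30):
    truth = F @ truth
    obs.append(H @ truth + rng.normal(0, 0.5, size=1))
x_est, P_est = kalman_assimilate(np.zeros(2), np.eye(2), obs, F, H, Q, R)
```

Note that the unobserved velocity is recovered through the dynamics, which is the essence of assimilation; the variational ML algorithms discussed above generalize this recursion to nonlinear operators and truncated windows.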
Industrial applications of this work are diverse, including for example the simulation of aerodynamic flows, flood prediction, optimization of electronic component testing, etc.
Thread 1: Accelerating physical model simulation and optimization with ML, using statistical and geometric approaches
Thread 2: Improving learning models with physical constraints
Chairs: Serge Gratton, Fabrice Gamboa, Thomas Schiex, Jérôme Bolte, Nicolas Dobigeon
AI & society
This theme addresses the challenges related to the socio-economic, legal and ethical acceptability of AI applications. It attempts to address questions like: "how does AI affect economic competitiveness and economic platforms?", "how can we use the power of AI to get the general public more involved in social/governmental decisions?", and "how does the general public perceive AI and autonomous AI systems, and how can we make them more acceptable?"
This theme also investigates legal and ethical issues surrounding contemporary AI developments. For example, how might the general public evaluate risk in conjunction with AI systems? How should governments legislate, and how should legal systems encode values of privacy and accountability in AI systems? For instance, who is responsible if there is an accident with a self-driving car?
Thread 1: Social acceptability and applications of AI
Thread 2: Responsibility: Legal and Ethical Issues
Chairs: Jean-François Bonnefon, Bruno Jullien, César A. Hidalgo, Céline Castets-Renard
Language
Language is central to much of collaborative AI. Linguistic interaction is important for socially and cognitively interactive robots. Topics in this theme are also relevant to work on learning, to Neurosciences and AI, and possibly also to robustness.
The first topic of this theme is advancing the state of the art in the extraction of deep semantic information from texts in semi-closed domains (i.e., domains about a particular subject matter, like maintenance or production logs). Currently, we are working with an Airbus dataset of NOTAMs, short texts that provide pilots with information about airport features and potential hazards. However, we envision applying this technique in a number of areas, including extracting information from maintenance logs or from conversations between humans and AI assistants. We will exploit not only lexical semantics but also the semantics encoded in semantic relations between clauses, or discourse structure.
The latter is important for capturing, for instance, reasons why an operation was or was not performed, exceptions to requirements, elaborations or more detailed descriptions of proposals or operations performed, as well as opinions and comments. Usually in this context we have little to no supervised or annotated data, so we will use low-resource learning combined with deep learning techniques in hybrid architectures.
The second topic is how non-linguistic information in a conversational context integrates with linguistic information to convey a message with fuller semantic content. Our main hypothesis is that this integration happens on many levels; conversational participants may make use of non-linguistic events as well as linguistic speech acts to convey content at the lexical, clause or discourse levels in a variety of ways. This makes multimodal interaction and dialog modeling two interacting threads. At the lexical and clausal levels, multimodal interactions can semantically ground visual information, which has important implications for the computational theories of vision and computer vision pursued in the Neuroscience and AI theme, as well as for more applied topics like visual question answering. This work also serves to advance multimodal representation learning. There is evidence that multimodal encoding of information will generally improve the robustness of predictions, so this work is also of interest to research on robustness in the Optimization theme. Understanding the interaction of multimodal information sources in conversation will also be crucial to advancing dialogue/conversation modeling and to improving the performance of robots/cobots and conversational assistants that have access to visual and linguistic data. The current state of the art exploits multimodal information either not at all or only in very primitive ways; this follows, if we are right, from the fact that only very few discourse uses of non-linguistic elements with respect to language have been explored.
A third topic, devoted to dialog/conversation modeling, aims to produce realistic conversational systems that are embedded in a physical environment. This will enable implemented systems like robots/cobots and embedded conversational assistants to have capacities that are lacking in current implementations, for instance the ability to learn new complex actions both from linguistic descriptions and from visual demonstrations, and to perform those actions. To train ML systems for advanced conversation, we have looked for suitable corpora. Conversational corpora exist, in particular the STAC corpus, which has nourished Toulouse work on conversation modeling, but we have not found a corpus with multimodal interactions. Thus, we will be resorting to distant supervision techniques in this thread as well.
Thread 1: Semantics from noisy linguistic data
Thread 2: Language and multimodality
Thread 3: Dialog/conversation modeling
Chairs: Rachid Alami, Leila Amgoud, Thomas Serre, Rufin VanRullen
Neuroscience & AI
This theme deals with the natural intelligence component of ANITI and aims to cross-fertilize neuroscience and artificial intelligence techniques. We have two main goals: 1) to use state-of-the-art AI methods (e.g., GANs) to effectively address fundamental questions in neuroscience; and 2) to use knowledge about the brain's modes of functioning to design improved AI algorithms and artificial neural network architectures, in particular for reinforcement learning.
A first topic in this theme focuses on the use of GANs and deep learning tools to find patterns in vast amounts of fMRI and EEG data, and to relate them to stimuli, cognition and behavior. We will compare the representations learned by the deep nets to human ones. To facilitate these projects, we will develop new tools to annotate and store the data in a standardized fashion. This includes the implementation of online signal processing, feature extraction and machine learning algorithms that address the transfer learning challenge and are robust to noise (i.e., out of the lab), to the work context and to the human operator, enabling the monitoring of mental states relevant in risky settings, such as workload, fatigue, stress, and error detection. We will especially focus on the design of neuroadaptive technology based on passive brain-computer interfaces dedicated to sensing human performance and to measuring multiple users' brains while they interact with each other and with artificial agents. The physiological data of interest will mainly consist of cardiac and oculomotor measures recorded using ECG and eye-tracking devices, but also cerebral measures recorded by electroencephalography (EEG). Another related topic is the direct comparison of human brain representations (via EEG or fMRI) and deep neural network representations on the same datasets, for both vision and language. In particular, we will explore the use of generative models (GANs) to improve visual image reconstruction from fMRI brain signals.
A second topic addresses the development of reverse reinforcement learning techniques to understand and learn about the brain processes that drive attention dynamics, using EEG and fMRI brain data, and then applying this knowledge to improve AI tools, in particular artificial neural networks (ANNs) and reinforcement learning (RL). The comparison will help us understand how the brain works and identify which neural network models are more (or less) compatible with representations derived from the brain or from human behavior, and possibly design new ways to improve this compatibility (e.g., using human brain activity as a regularizer during neural network training), which should improve their robustness.
Thread 1: AI for decoding brain activation
Thread 2: Reverse-engineering the brain to improve AI algorithms
Chairs: Rachid Alami, Frédéric Dehais, Nicolas Mansard, Thomas Serre, Rufin VanRullen
Optimization & game theory for AI
This theme develops the theory of, and algorithms for, optimization for ML/AI. We are interested in particular in how results in optimization and machine learning structurally depend on the mathematical tools used or assumed, such as calculus, geometry and algebra. We also study how some of these tools are implemented, for instance automatic differentiation. In this theme we also address questions about strategies and information, and we investigate various optimization methods (first-order methods (FOM), second-order methods (SOM), sum-of-squares optimization (SOS)) and study their convergence, stabilisation and robustness properties.

One topic of this theme concerns optimisation methods for large-scale problems in ML, and proofs of convergence for first-order optimisation techniques in various types of problems. We are interested in structural features of ML problems and how they relate to optimization, and in various optimization strategies that we can think of as being arranged in a hierarchy. We are also interested in higher-order optimization methods (e.g., Newton's method) in order to develop a multilevel strategy with a hierarchy of approximating problems of decreasing dimension. We will also be looking at worst-case analyses, in which we build worst-case scenarios to understand the limits of a given optimization strategy. A final topic for this thread is surrogate input-output models for complex problems: neural networks can be used as surrogate models for complex functions whose evaluation requires heavy, time- and energy-consuming simulations. Our goal is to construct a dataset and a neural network architecture that guarantee the faithfulness of the surrogate model to the function it is supposed to model.

A second topic is optimization theory for robust AI.
The goal is to define new methods, conditions and tools that provide more, and better, guarantees on the behavior of neural networks, while at the same time securing generalization capacity: the capacity to detect unknown observations and to take these new observations into account to update and increase the capacity of the algorithms. We have four ways to investigate robustness. A first approach focuses on the robustness and sensitivity of deep learning algorithms by providing numerical certificates for formal proofs (via certificates of positivity). A second approach will consist in studying robustness via worst-case analyses or counterfactual models. A third approach will focus on developing trust-region algorithms with dynamic accuracy on function and gradient values for very high-performance multiprecision computation platforms. Finally, we will also study robustness using statistical tools, e.g., Wasserstein distances, with the goal of devising robust classification algorithms by taking advantage of the fine sensitivity of these distances for measuring differences between distributions/images. Another challenge is to elaborate necessary/sufficient stability conditions independent of the optimization algorithms and procedures (choice of the algorithm, initialization, batches). A last set of challenges addresses the link between robustness and low-resource learning: the goal is to devise novel techniques enabling the optimization and analysis of neural networks with quantized weights and low energy consumption. A third topic is to examine possible applications of game theory concepts and methods to AI and to study strategic behavior in complex (AI-related) environments.
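For the statistical route via Wasserstein distances, a useful fact is that the empirical 1-Wasserstein distance between two equal-size one-dimensional samples reduces to the mean absolute difference of sorted values. The sketch below uses this standard identity to show the distance's sensitivity to a distribution shift; the sample sizes and shift are illustrative:

```python
import numpy as np

def wasserstein_1d(samples_a, samples_b):
    """Empirical 1-Wasserstein distance between two 1-D samples of equal
    size: mean absolute difference of sorted values (standard identity)."""
    a, b = np.sort(samples_a), np.sort(samples_b)
    return float(np.mean(np.abs(a - b)))

rng = np.random.default_rng(6)
base = rng.normal(0.0, 1.0, size=5000)
shifted = rng.normal(0.5, 1.0, size=5000)       # mean shifted by 0.5

d_self = wasserstein_1d(base, rng.normal(0.0, 1.0, size=5000))
d_shift = wasserstein_1d(base, shifted)          # should be close to 0.5
```

The distance between two draws of the same distribution is near zero while the shifted sample is detected at roughly the size of the shift, the fine sensitivity the text refers to; robust classification methods exploit this in much higher dimensions.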
The topics addressed include learning and bandits (no-regret learning, strategic experimentation with observed actions and private rewards, bilateral trade with bandits) as well as Generative Adversarial Networks (GANs), for which we hope to provide more stable GAN training techniques and new tractable algorithms with proofs of convergence/finite-time error bounds using solution concepts from game theory.
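No-regret learning in a zero-sum game can be sketched with exponential-weights self-play, whose time-averaged play approaches equilibrium; the step size, horizon and matching-pennies example are illustrative choices of our own:

```python
import numpy as np

def exp_weights_selfplay(payoff, n_rounds=5000, eta=0.05):
    """Both players run the exponential-weights (Hedge) no-regret rule
    against each other; the row player's time-averaged mixed strategy
    approaches an equilibrium strategy of the zero-sum game."""
    n, m = payoff.shape
    w_row, w_col = np.array([1.0, 3.0]), np.array([2.0, 1.0])  # asymmetric start
    avg_row = np.zeros(n)
    for _ in range(n_rounds):
        p = w_row / w_row.sum()
        q = w_col / w_col.sum()
        avg_row += p
        w_row = w_row * np.exp(eta * (payoff @ q))     # row maximizes
        w_col = w_col * np.exp(-eta * (payoff.T @ p))  # column minimizes
        w_row /= w_row.sum()  # renormalize to avoid overflow
        w_col /= w_col.sum()
    return avg_row / n_rounds

# matching pennies: the unique equilibrium is the uniform mix (1/2, 1/2)
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
strategy = exp_weights_selfplay(A)
```

The instantaneous strategies cycle, but the average converges, which is the same averaging phenomenon exploited when game-theoretic solution concepts are used to stabilize GAN training.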
Thread 1: Optimization theory for AI
Thread 2: Robustness
Thread 3: Game theory and AI
Chairs: Jérôme Bolte, Serge Gratton, Jean-Michel Loubès, Jean-Bernard Lasserre, Jérôme Renault, Marc Teboulle
Automated reasoning and decision making
Machine learning often reduces to continuous numerical optimization problems, yielding as output a learned model defined by a large collection of numerical parameters. Logical reasoning, in its most fundamental propositional form, reduces instead to discrete constraint satisfaction problems. Algorithms that can provably solve problems with numerical parameters, discrete variables and logical properties (or constraints) are therefore of prime importance for AI, allowing us to impose constraints on the output of learned models. The main challenge here is that automated logical reasoning is computationally hard (its decision problem is NP-complete) and cannot be approximated. Scalability is therefore often a major focus.
The first topic of this theme explores different complementary techniques to reduce complexity and improve scalability. Toulouse is well known for (weighted) CP/SAT/MIP/graphical model solving and for work in automated reasoning, which the group is using to provide automated reasoning for and with machine learning. Automated reasoning can offer optimal decoding, exact loss optimization, and the enforcement of logical constraints on decoded output. Using reasoning about inconsistency, optimizing with NP oracles, and exploiting efficient computation of prime implicants, automated reasoning can also explain ML models or provide proofs that a learned model satisfies fairness or robustness properties. Conversely, machine learning can be used to learn interpretable discrete optimization problems from data, or to guide NP-complete problem solvers in the exponentially sized discrete spaces they explore. The major challenge here remains the extreme computational complexity of the problems considered, which are often NP-complete or worse. Specifically, the introduction of preferences, uncertainty and adversarial reasoning typically requires solving problems beyond NP, including #P-complete and quantified problems higher in the polynomial hierarchy. A major objective is therefore the development of more effective reasoning techniques, including tighter bounds based, e.g., on convex/SDP relaxations or obtained by learning implied constraints during solving, as well as dedicated algorithms for tackling problems beyond NP such as adversarial (bilevel) quantified problems and counting problems. In some cases, it is possible to "compile" the original description of the problem in such a way that online requests can be handled in polynomial time w.r.t. the compiled form (no free lunch: in the worst case, the size of the compiled form can be exponential with respect to the size of the original description).
Nevertheless, for several real-world cases, the compiled form is tractable and sometimes more compact than the original form. We shall thus develop compilation models and algorithms for the online optimization of problems dealing with preferences and/or uncertainties, whether the uncertainty/preference model is quantitative (e.g., GAI nets, Bayesian nets, Markov random fields, scheduling and planning problems) or qualitative (e.g., CP nets, logical approaches, point and interval algebra). The main research lines here are both theoretical (charting compilation maps for new domains, temporal reasoning, preference languages, learning) and application-motivated, trying to develop approximate compilation principles for when the compiled form explodes, as well as using knowledge compilation techniques to boost incomplete search algorithms. Automated reasoning algorithms also require a model that describes the problem to be solved. While in many cases a majority of the knowledge that describes the real problem follows from physical laws and can be expressed as logical constraints on space or time (for example in scheduling or timetabling), there is often a variable fraction of information that can only be extracted from data. One of our targets will therefore be to learn discrete optimization problems from data. Convex/SDP relaxations of discrete optimization problems such as Max2SAT offer an interesting approach, as their parameters can often be learned by convex optimization provided a suitable loss function is used. Finally, in practice, solving NP-hard problems often requires identifying certificates in exponentially sized spaces.
While the technology of guiding heuristics for automated reasoning has made important empirical progress in the last decades, using so-called "adaptive heuristics" that learn how to solve a problem while trying to solve it, more sophisticated ML approaches such as multi-armed bandits or combinations of reinforcement learning and deep learning offer opportunities to learn application-oriented heuristics tailored to a specific domain.
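The bandit view of adaptive heuristics can be sketched with the classical UCB1 rule: each candidate heuristic is an arm, and the observed reward (e.g. whether an instance was solved within a budget) drives the choice of heuristic on the next instance. The two "heuristics" below are hypothetical stand-ins with fixed success rates, not real solver heuristics:

```python
import math, random

def ucb1_select(counts, values, t):
    """UCB1: pick the arm maximizing empirical mean + exploration bonus."""
    for a in range(len(counts)):        # play each arm once first
        if counts[a] == 0:
            return a
    return max(range(len(counts)),
               key=lambda a: values[a] + math.sqrt(2 * math.log(t) / counts[a]))

def tune(heuristics, rounds, rng):
    """Online heuristic selection: each round, pick one heuristic, observe
    a noisy reward, and update its running mean."""
    counts = [0] * len(heuristics)
    values = [0.0] * len(heuristics)
    for t in range(1, rounds + 1):
        a = ucb1_select(counts, values, t)
        r = heuristics[a](rng)          # e.g. 1.0 if solved within budget
        counts[a] += 1
        values[a] += (r - values[a]) / counts[a]
    return counts

rng = random.Random(0)
# two hypothetical heuristics with solve rates 0.3 vs 0.7 on this family
arms = [lambda g: 1.0 if g.random() < 0.3 else 0.0,
        lambda g: 1.0 if g.random() < 0.7 else 0.0]
pulls = tune(arms, 500, rng)
print(pulls)    # the better heuristic accumulates most of the pulls
```

In a solver, the same loop can run per restart or per subproblem, which is how adaptive branching heuristics learn while solving.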
The second topic will deal with different applications of automated reasoning techniques to complex systems, for example designing proteins using both machine learning and automated reasoning approaches, but also supporting product configuration, offline and online decision making (e.g., reactive and proactive scheduling in assembly manufacturing), or modeling and finding preferred diagnosis strategies. Two particular topics of interest in this thread are bilevel (min-max) discrete optimization for adversarial protein design, and the joint maximisation of diagnosability and minimisation of subsystem connections via system decomposition and test selection.
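As a toy illustration of the bilevel (min-max) formulation mentioned above (not of actual protein design), the sketch below enumerates both levels exhaustively: the designer picks the discrete choice whose worst case over an adversary's response is best. The score function and domains are invented; real instances require the dedicated beyond-NP algorithms discussed earlier:

```python
from itertools import product

def minimax_design(score, x_domains, y_domains):
    """Brute-force bilevel optimization: choose x maximizing the worst case
    over the adversary's choice y. Such problems sit at the second level of
    the polynomial hierarchy, hence both levels are solved exhaustively
    here only for illustration."""
    best_x, best_val = None, float("-inf")
    for x in product(*x_domains):
        worst = min(score(x, y) for y in product(*y_domains))
        if worst > best_val:
            best_x, best_val = x, worst
    return best_x, best_val

# invented score: feature x0 is valuable (weight 3) but attackable by y0
score = lambda x, y: 3 * x[0] + x[1] - 2 * x[0] * y[0]
x, v = minimax_design(score, [(0, 1), (0, 1)], [(0, 1)])
print(x, v)   # (1, 1) 2: x0 stays worthwhile even under attack
```

The same max-min skeleton underlies adversarial design tasks where y ranges over perturbations of the designed object.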
Thread 1: Algorithms and complexity
Thread 2: Applications to complex systems
Chaires : Leila Amgoud, Daniel Delahaye, Hélène Fargier, Joao Marques-Silva, Thomas Schiex, Louise Travé-Massuyès
Robotics & AI
This theme brings the integrative challenge of AI to full realization. It includes challenges from computer vision, optimal motor control and planning, needed to provide the autonomous functional and decisional abilities required to interact with a physical environment in the presence of uncertainty; but it also brings in capabilities for reasoning about the cognitive states of others, and language skills for human-robot interaction and collaboration, where humans and robots work together on a common task and share a physical space.
One topic in this theme addresses fundamental aspects of motion generation and motion understanding. These aspects link robotics to research in optimal control, reinforcement learning and constrained optimization. Our research will also focus on developing efficient hybrid methods for planning and controlling the motion of complex robots, i.e. mobile robots with arms and/or legs. One of the core advances expected from advanced AI methods for robotics is a unified integration of what is typically decomposed into sensing/deciding/planning/controlling into a single decisional unit, able to take complex (task-related) decisions from raw sensor data without explicit decomposition. A first milestone in this direction is to enable real-time model predictive control without simplified models, able to control the full dynamics of a complex robot while avoiding obstacles and respecting other constraints. This involves developing fast and robust numerical solvers, able to solve 1,000-variable constrained problems in milliseconds. Another challenge is related to the elaboration and execution of highly constrained human-aware and task-aware robot motion plans. Typical examples are robot motions in the presence of, and in synergy with, humans. The difficulty here is to take explicitly into account, in the motion and physical action synthesis process, an estimation of humans' states and intentions, as well as their preferences and social norms in terms of predictability, legibility and acceptability of robot trajectories. Another class of challenging problems is the integrated treatment of the intricacies of combined task and motion problems. A second topic focuses on cognitive abilities and communication.
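The model predictive control milestone discussed above has, at its unconstrained core, a finite-horizon linear-quadratic problem solved by backward Riccati recursion. The sketch below illustrates only that core, on a hypothetical double-integrator robot model with illustrative cost weights; a real MPC solver would add obstacle and actuation constraints and re-solve at kilohertz rates:

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def madd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def lqr_gains(A, B, Q, R, N):
    """Backward Riccati recursion for finite-horizon LQR, the unconstrained
    core of linear model predictive control."""
    P, gains = Q, []
    for _ in range(N):
        BtP = matmul(transpose(B), P)
        s = R[0][0] + matmul(BtP, B)[0][0]        # scalar: single input
        K = [[v / s for v in matmul(BtP, A)[0]]]  # K = (R + B'PB)^-1 B'PA
        BK = matmul(B, K)
        AmBK = madd(A, [[-v for v in row] for row in BK])
        P = madd(Q, matmul(matmul(transpose(A), P), AmBK))
        gains.append(K)
    return gains[::-1]                            # gains[0] is for time 0

# hypothetical double integrator: state = (position, velocity), input = accel.
dt = 0.1
A = [[1.0, dt], [0.0, 1.0]]
B = [[0.0], [dt]]
Q = [[1.0, 0.0], [0.0, 0.1]]                      # illustrative state cost
R = [[0.01]]                                      # illustrative input cost
Ks = lqr_gains(A, B, Q, R, N=50)

# receding-horizon loop: re-plan each step and apply the first gain
# (dynamics and horizon are fixed here, so the first gain can be reused)
x = [[1.0], [0.0]]                                # start 1 m from the target
for _ in range(100):
    u = -matmul(Ks[0], x)[0][0]
    x = madd(matmul(A, x), matmul(B, [[u]]))
```

Scaling this recursion to full whole-body dynamics, with constraints, is precisely where the fast dedicated solvers mentioned above are needed.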
If robots are to become truly interactive coworkers with humans, or helpers at home, they must be able to acquire and convey information to humans in natural conversation, and they must hold and be able to update a theory of mind for the humans they interact with, so as to predict their partners' wishes or fears, as well as limitations on their own performance or that of their human partners. More specifically, this thread aims to study how cognitive abilities can be modelled for use by a robot, and how monitoring tools used for human brain study could benefit robotics. Significant advances also need to be achieved in natural language processing, knowledge representation and communication for HRI. The aim is to investigate and improve the models and processes necessary for effectively integrating human-machine communication (including non-linguistic actions like gestures, as well as conversational or dialogue moves) with the decisional processes involved in human-robot joint task achievement. A third topic focuses on the planning and decisional processes necessary for a cognitive robot to conduct a collaborative activity with humans and other robots. Several key challenges are addressed: planning in the presence of uncertainty, (human-aware) task planning, decisional processes for collaborative task achievement, mixed initiative, integration of learning and planning, automated decision-making, hierarchical and interactive planning frameworks, and plan explainability. A second major goal is to improve control architectures and tools for building autonomous robot systems. This includes research on the development of a rigorous design approach and associated tools for building software architectures for autonomous robots (to specify functional components for autonomous systems and to automatically synthesize formal models of these components).
These models can then be used to verify properties of the system offline, but also to monitor its execution online and to enforce temporal and logical constraints. We will also design generic multi-level control architectures integrating planning and time-bounded decision modules for autonomous cognitive and interactive robots that achieve tasks in the presence of, and in collaboration with, humans. We will make use of many AI techniques, from visual processing to symbolic reasoning, from task planning to theory-of-mind building, from reactive control to action recognition and learning, and we will combine these abilities in a coherent deliberative architecture. A fourth topic addresses social and societal aspects of HRI. The social context of a human-robot interaction consists of the multiplicity of roles, norms, conventions and social practices that we, as humans, explicitly or implicitly define to handle our daily lives together. Even though roboticists have acknowledged this complexity from the very beginning of social robotics, we still lack a general theoretical framework for describing a social interaction context and stating general and specific requirements. This thread aims to address this in several directions: first, by studying which social models could be of interest and how they can be translated into a robotic architecture; second, by studying what impact robots could have on our lives and what we need to care about.
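The online monitoring of temporal constraints mentioned above can be sketched as a small event-driven monitor, in the style of runtime verification. The bounded-response property and the event names below are illustrative assumptions, not part of any specific robot framework:

```python
def bounded_response_monitor(k):
    """Online monitor for the temporal property "every request is granted
    within k steps": a small automaton fed one event per time step."""
    pending = []                        # ages of not-yet-granted requests

    def step(event):
        for i in range(len(pending)):   # one time step elapses
            pending[i] += 1
        if event == "request":
            pending.append(0)
        elif event == "grant" and pending:
            pending.pop(0)              # a grant serves the oldest request
        return all(age <= k for age in pending)   # False signals a violation

    return step

monitor = bounded_response_monitor(k=2)
trace = ["request", "idle", "grant", "request", "idle", "idle", "idle"]
verdicts = [monitor(e) for e in trace]
print(verdicts)   # last verdict is False: the second request timed out
```

In a robot architecture, such monitors sit between the synthesized component models and the running system, flagging (or blocking) executions that violate the verified temporal constraints.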
Thread 1: Motion planning and control
Thread 2: Cognitive abilities and communication
Thread 3: Architecture, decision and interaction
Thread 4: Social and societal aspects of human-robot interaction
Chaires : Rachid Alami, Céline Castets-Renard, Frédéric Dehais, Nicolas Mansard, Claire Pagetti, Thomas Serre, Rufin van Rullen