#### How to apply?

Send your detailed CV, a cover letter and a copy of your degrees to aniti-phd@univ-toulouse.fr.

Samples of your scientific publications and letters of recommendation will be a plus.

**PhD thesis offers**

#### Competition policy and regulation in digital markets in the era of artificial intelligence

**Advisors:** Bruno Jullien (Université Toulouse Capitole, TSE)
**Contact:** br.jullien@gmail.com
**Net salary:** €2,096 per month with some teaching (64 hours per year on average)
**Duration:** 36 months

**Context**

Today, digital transformation and artificial intelligence are major issues for competition policy. Digital platforms and the intensive use of algorithms and artificial intelligence call for competition policy to evolve. New anti-competitive practices may result from the use of algorithms, data mining and network effects to set prices. The thesis will aim to understand some of these practices through a theoretical and empirical microeconomic study of the impact of artificial intelligence on a digital market.

**Description**

The objective of the thesis will be to contribute to adapting competition policy to digital change and artificial intelligence. Digital platforms and the intensive use of algorithms and artificial intelligence present numerous challenges for antitrust. New anti-competitive practices may result from the use of algorithms, data exploitation and network effects, and Big Data induces new forms of discrimination. The thesis will aim to understand some of these practices through a theoretical and empirical microeconomic study of the impact of artificial intelligence on a digital market, or of practices affecting the development of AI (mergers and acquisitions, for example). The method will be twofold: on the one hand, the development of theoretical models solved by means of game theory, as practiced in modern industrial organization; on the other hand, the empirical estimation of structural models using recent econometric techniques.
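As a minimal illustration of the first ingredient (theoretical models solved by game theory), the sketch below computes a Bertrand-Nash equilibrium by best-response iteration in a logit-demand duopoly, a workhorse model of modern industrial organization. All parameter values are invented for the example and do not come from the project.

```python
import numpy as np

# Illustrative only: best-response iteration to a Bertrand-Nash equilibrium
# in a symmetric logit-demand duopoly (all parameter values are made up).
a, mu, c = 2.0, 0.5, 1.0           # product quality, logit noise, marginal cost
grid = np.linspace(1.0, 3.0, 2001) # candidate prices

def demand(p_own, p_rival):
    """Logit market share of a firm, with an outside option (the +1 term)."""
    u = np.exp((a - p_own) / mu)
    return u / (u + np.exp((a - p_rival) / mu) + 1.0)

def best_response(p_rival):
    """Profit-maximizing price against a rival price, by grid search."""
    profits = (grid - c) * demand(grid, p_rival)
    return grid[np.argmax(profits)]

p1 = p2 = 1.5
for _ in range(100):               # iterate best responses until they settle
    p1, p2 = best_response(p2), best_response(p1)
print("equilibrium prices:", p1, p2)
```

The fixed point of the best-response map is the Nash equilibrium of the pricing game; the same iteration scheme is what algorithmic pricing agents effectively approximate, which is one reason such models are relevant to the thesis topic.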

The candidate should have an excellent level in theoretical and empirical industrial economics, microeconomics, game theory and econometrics.

This thesis project will be supervised by Bruno Jullien, DR CNRS at Toulouse School of Economics (TSE), and holder of the ANITI chair « Artificial intelligence and competition on markets ».

The thesis will take place at TSE-R, in the TSE Doctoral School. The subject is directly related to the themes of TSE, which has acquired an international reputation in the field.

#### Optimization of the structure and activation functions of deep neural networks for improved trajectory prediction of airliners

**Advisors:** Daniel Delahaye, Nicolas Couellan (ENAC), François Soumis (IVADO)
**Contacts:** Delahaye@recherche.enac.fr, nicolas.couellan@recherche.enac.fr
**Net salary:** €2,096 per month with some teaching (64 hours per year on average)
**Duration:** 36 months

**Context**

Neural networks are currently able to rival or even surpass humans in games and control problems. However, these networks are often built by replicating the same structure over a set of fairly homogeneous layers, and learning then optimizes the values of the neuronal weights without taking either the network structure or the activation functions into account in the optimization process. The objective of this research is to use artificial-evolution principles to jointly optimize the network structure and the associated activation functions in order to improve the overall learning process. We propose to apply this methodology to the problem of predicting the trajectories of civil transport aircraft.

**Description**

In recent years, artificial neural networks (ANNs [1]) have demonstrated impressive learning capabilities for a wide variety of tasks. These methods have recently been applied to reinforcement learning (RL), where ANNs have been trained to play video and board games, achieving human-level or higher performance [2]. However, a number of limitations of deep learning remain. In particular, only the neuron weights are tuned by optimization algorithms (gradient backpropagation); few initiatives explore the potential benefits of taking other network attributes into account in the optimization process.

At the same time, optimization methods have undergone very interesting developments, especially in the framework of metaheuristics [3], which make it possible to address problems of great complexity in many fields of application. This success is also linked to advances in computing hardware, whose power has increased dramatically (GPUs).

In this study, we propose to use artificial-evolution principles [4] to optimize the structure and activation functions of neural networks in order to improve their learning and adaptation performance.

In order to validate this approach, we propose to apply this methodology to a difficult problem from the field of air traffic management: trajectory prediction.
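The joint search over network structure and activation functions can be sketched as a simple (mu + lambda) evolution strategy. Everything below is invented for the illustration: the toy regression task stands in for trajectory prediction, the candidate activation set and hyperparameters are arbitrary, and fitness is evaluated cheaply with a random-feature network and a least-squares readout rather than full gradient training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task standing in for trajectory prediction (illustrative only).
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2

ACTIVATIONS = {
    "tanh": np.tanh,
    "relu": lambda z: np.maximum(z, 0.0),
    "sin": np.sin,
}

def fitness(genome):
    """Lower is better: MSE of a random-feature network whose hidden
    width and activation function are given by the genome."""
    width, act_name = genome
    act = ACTIVATIONS[act_name]
    W = rng.standard_normal((X.shape[1], width))
    H = act(X @ W)                                 # random hidden features
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # least-squares readout
    return float(np.mean((H @ beta - y) ** 2))

def mutate(genome):
    """Perturb either the structure (width) or the activation function."""
    width, act_name = genome
    if rng.random() < 0.5:
        width = max(2, width + int(rng.integers(-4, 5)))
    else:
        act_name = rng.choice(list(ACTIVATIONS))
    return (width, act_name)

# (mu + lambda) evolution strategy over (width, activation) genomes.
pop = [(int(rng.integers(2, 32)), rng.choice(list(ACTIVATIONS))) for _ in range(8)]
for gen in range(20):
    pop += [mutate(g) for g in pop]       # offspring
    pop = sorted(pop, key=fitness)[:8]    # keep the fittest
best = pop[0]
print("best genome:", best)
```

In the thesis itself the genome would encode far richer attributes (layer graph, per-layer activations) and fitness would come from actual training, but the selection/mutation loop has the same shape.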

The aim of trajectory prediction is to estimate the future position of aircraft along their predicted trajectories in order to detect potential conflicts and optimize airspace occupancy. This prediction is an essential task of the air traffic control (ATC) process and has been studied for many years [5]. For the future automation processes developed in the SESAR (Europe), NextGen (USA) and CARATS (Japan) projects, such trajectory prediction will be even more critical. In these projects, trajectory predictors generate predicted aircraft trajectories, usually for client applications. Since there is always a discrepancy between the forecast wind/temperature (from the weather forecast) and the wind/temperature actually encountered, the main source of longitudinal (along-track) error between the predicted and actual trajectory is the weather estimate. Other factors also influence the prediction, such as aircraft weight, aircraft model parameters, intentions, etc.

In order to improve this trajectory prediction, we propose to use a hybrid approach based on a coupling of artificial evolution and neural networks.

This research, co-supervised by researchers from ENAC and from the IVADO institute (Canada), will build on the existing collaboration between the OPTIM and IVADO research teams, and will be carried out in the framework of the ANITI chair entitled « Artificial Intelligence for Air Traffic Management and Large-Scale Urban Mobility » held by Daniel Delahaye. OPTIM is one of the four units of the ENAC research laboratory. It covers a large part of the optimization spectrum (continuous and mixed optimization, discrete optimization, machine learning, optimal control and air traffic applications). IVADO is one of the world’s leading institutes in the field of operations research and machine learning. These entities are currently working on similar issues and have been collaborating for several years. Nicolas Couellan (ENAC/OPTIM/ANITI) and François Soumis (IVADO) will be in charge of the supervision.

**References**

[1] Ian Goodfellow, Yoshua Bengio and Aaron Courville. Deep Learning. MIT Press, 2016.

[2] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529, 2015.

[3] M. Gendreau and J.-Y. Potvin (eds.). Handbook of Metaheuristics. International Series in Operations Research & Management Science, Vol. 146. Springer, 2019.

[4] T. Bäck. Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Programming, Genetic Algorithms. Oxford University Press, 1996.

[5] Zhengyi Wang, Man Liang and Daniel Delahaye. A hybrid machine learning model for short-term estimated time of arrival prediction in terminal manoeuvring area. Transportation Research Part C: Emerging Technologies, 95:280–294, October 2018.

#### Advanced data-driven techniques for the moment-SOS hierarchy

**Advisors:** Jean-Bernard Lasserre, Milan Korda, Victor Magron (LAAS-CNRS)
**e-mail:** lasserre@laas.fr | vmagron@laas.fr | korda@laas.fr
**Net salary:** €2,096 per month with some teaching (64 hours per year on average)
**Duration:** 36 months

**Abstract**

Many problems from various fields (optimization, statistics, dynamical systems, quantum physics) can be addressed by Lasserre’s hierarchy [1,2]. This unified approach solves a series of convex optimization problems of increasing size. Some of these applications also require describing the support of a measure from a finite number of its moments (precisely the information produced by the Lasserre hierarchy), thanks to the Christoffel-Darboux kernel. An important open question is whether this approach can be applied in a « data-driven » context, where the underlying model is unknown but its behaviour is described by available observations. The project consists in exploring this research avenue, building on promising early work [3]. Any progress in this direction would establish Lasserre’s hierarchy as a new rigorous tool for certain « big data » applications, and a complement to current ad-hoc heuristic methods with limited mathematical foundations.

**Description**

The goal of this research is the extension of Lasserre’s hierarchy and of the Christoffel-Darboux kernel to a « data-driven » context, i.e. a context governed no longer by an analytical model but rather by data or observations, and in some cases by a large amount of such data. The aim will be to develop new methods and, as far as possible, also to provide an analysis of convergence guarantees, the influence of possible spurious solutions, etc.

A first research lead is to try to extend the approach of [4] to measures whose support is contained in certain classes of manifolds. The intention is to apply this methodology to deep learning models (deep neural networks), for which it is commonly accepted that a latent representation of the observations lies on a manifold of relatively small dimension. To validate the results obtained, numerical experiments on several standard benchmarks are envisaged, including MNIST, CIFAR10 and Fashion-MNIST.
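The empirical Christoffel function at the heart of this approach admits a very compact sketch: build the empirical moment matrix of a degree-d monomial basis from the data, then evaluate the inverse quadratic form. The point cloud, degree and regularization below are invented for the illustration; the key qualitative fact is that the function takes much larger values inside the support of the empirical measure than outside it.

```python
import numpy as np
from itertools import combinations_with_replacement

def monomial_basis(X, d):
    """Evaluate all monomials of degree <= d at the rows of X (n x p)."""
    n, p = X.shape
    cols = [np.ones(n)]
    for deg in range(1, d + 1):
        for idx in combinations_with_replacement(range(p), deg):
            cols.append(np.prod(X[:, idx], axis=1))
    return np.column_stack(cols)

def empirical_christoffel(X, d, reg=1e-8):
    """Return a callable Lambda(x) built from the empirical moment matrix
    M = (1/n) sum_i v(x_i) v(x_i)^T, with Lambda(x) = 1 / v(x)^T M^{-1} v(x)."""
    V = monomial_basis(X, d)
    M = V.T @ V / len(X)                              # empirical moment matrix
    Minv = np.linalg.inv(M + reg * np.eye(M.shape[0]))
    def Lam(x):
        v = monomial_basis(np.atleast_2d(x), d)
        return 1.0 / np.einsum("ij,jk,ik->i", v, Minv, v)
    return Lam

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2)) * 0.3                   # cloud concentrated near the origin
Lam = empirical_christoffel(X, d=3)
inside, outside = Lam([0.0, 0.0])[0], Lam([3.0, 3.0])[0]
print(inside, outside)
```

This threshold-on-Lambda behaviour is what makes the Christoffel function usable as a support/outlier detector, and it is exactly the object whose extension to data on manifolds the thesis proposes to study.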

Other complementary avenues may include (but are not limited to) the analysis of adaptive sampling techniques and the choice of bases for the approach developed in [3], as well as the extension of the proposed method beyond the class of problems initially considered; for example, the extension to data-driven optimal control problems.

Another important direction of research, further upstream, consists in studying the reduction of computational complexity through the exploitation of sparsity and/or symmetry present in high-dimensional applications (a typical big-data context), or of any other structural property to be discovered according to the type of application considered.

We insist on the large degree of freedom that we wish to grant to the successful candidate, so that they can give free rein to their creativity (as much as possible within the framework described above).

This thesis, co-funded by ANITI, will be co-supervised by Jean-Bernard Lasserre, DR CNRS in the MAC team of LAAS and holder of the ANITI chair « Polynomial optimization and Christoffel function in ML & data analysis », as well as by Milan Korda and Victor Magron, associate researchers of the same chair and CR CNRS in the MAC team at LAAS. It is fully part of the activities of the laboratory: it aligns with the investment of LAAS and ANITI in analyzing the robustness of machine learning algorithms and studying their convergence properties.

**References**

[1] J.-B. Lasserre (2001). Global optimization with polynomials and the problem of moments. SIAM Journal on Optimization, 11(3), pp. 796–817.

[2] J.-B. Lasserre (2010). Moments, Positive Polynomials and Their Applications. World Scientific, Singapore.

[3] M. Korda (2019). Data-driven computation of the maximum positively invariant set for nonlinear dynamical systems. FEANICSES workshop, https://cavale.enseeiht.fr/feanicses/files/W2019/4-korda.pdf

[4] E. Pauwels, M. Putinar and J.-B. Lasserre (2020). Data analysis from empirical moments and the Christoffel function. Foundations of Computational Mathematics. hal-01845137

[5] E. Pauwels and J.-B. Lasserre (2019). The empirical Christoffel function with application in data analysis. Advances in Computational Mathematics 45, pp. 1439–1468.

#### Improvement of data assimilation algorithms using latent spaces from machine learning

**Advisors:** Serge Gratton (Toulouse INP, IRIT)
**e-mail:** serge.gratton@toulouse-inp.fr
**Net salary:** €2,096 per month with some teaching (64 hours per year on average)
**Duration:** 36 months

**Abstract**

Data assimilation is a proven technique for predicting the state of a system using a mathematical representation of its dynamics together with physical observations. To cite well-known applications, it is the technique used for weather forecasting in meteorology and for predicting the behaviour of oceans, or of rivers during floods.

This technique is implemented in major forecasting centres (Météo-France, the European Centre for Medium-Range Weather Forecasts), and for smaller physical systems (at EDF, for example, in neutronics, to forecast the state of power plant cores). It requires significant computing power.

Machine learning is an artificial intelligence technique that performs prediction tasks by adjusting the weights of deep neural networks through a training procedure on data. The field is expanding rapidly, and its successes invite its application beyond its classical domains of use (such as signal and speech processing).

The central question for the thesis is: can we obtain more efficient approaches to achieve predictions in physical systems using artificial intelligence? We are looking here for algorithms that save computing power (memory, electrical energy, computing time) and are more reliable than classical data assimilation techniques.

**Description**

Several directions will be explored in the thesis to accelerate reference data assimilation methods, based on nonlinear model reduction using artificial intelligence. The aim will be to discover whether a physical system admits an optimized representation in a suitable basis. The approach will rely on autoencoders to uncover a possible latent structure in the dynamics. Data assimilation will then be carried out in the latent space by filters able to take into account the error made when approximating the real dynamics by the latent dynamics.

More precisely, the thesis will have the following objectives:

- To develop a data assimilation algorithm for problems having a representation in a low dimensional space. An important point will be the statistical characterization of the gap between the real dynamics and the latent dynamics. Several algorithms will be developed, starting from algorithms that only integrate the system in a latent space, to algorithms that perform the entire data assimilation in this space.
- To generalize the algorithms to allow them to automatically detect whether such a latent structure, which is relevant for data assimilation, exists.
- To generalize the preceding approaches to online system monitoring, when the latent spaces are time-dependent.
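The first objective can be sketched end to end. In this sketch PCA stands in for the autoencoder's latent map (a linear surrogate, purely for illustration), and a stochastic ensemble Kalman filter update is carried out entirely in the latent space; every dimension, noise level and operator below is invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# High-dimensional states that actually live near a 3-D subspace.
n, k, n_ens = 50, 3, 40
basis = np.linalg.qr(rng.standard_normal((n, k)))[0]   # hidden latent directions
truth = basis @ np.array([1.0, -2.0, 0.5])             # true state to recover

# Encoder/decoder learned by PCA from a sample of states (autoencoder surrogate).
sample = (basis @ rng.standard_normal((k, 500))).T + 0.01 * rng.standard_normal((500, n))
U = np.linalg.svd(sample - sample.mean(0), full_matrices=False)[2][:k].T
encode = lambda x: x @ U              # R^n -> R^k
decode = lambda z: z @ U.T            # R^k -> R^n

# Noisy partial observations of the full state.
H = np.eye(n)[::5]                    # observe every 5th component
r = 0.1
obs = H @ truth + r * rng.standard_normal(H.shape[0])

# Stochastic ensemble Kalman filter update, carried out in the latent space.
Z = rng.standard_normal((n_ens, k))                    # prior latent ensemble
Y = decode(Z) @ H.T                                    # predicted observations
Zm, Ym = Z.mean(0), Y.mean(0)
Pzy = (Z - Zm).T @ (Y - Ym) / (n_ens - 1)              # latent/obs cross-covariance
Pyy = (Y - Ym).T @ (Y - Ym) / (n_ens - 1) + r**2 * np.eye(H.shape[0])
K = Pzy @ np.linalg.inv(Pyy)                           # Kalman gain in latent space
perturbed = obs + r * rng.standard_normal((n_ens, H.shape[0]))
Z_post = Z + (perturbed - Y) @ K.T

err_prior = np.linalg.norm(decode(Z).mean(0) - truth)
err_post = np.linalg.norm(decode(Z_post).mean(0) - truth)
print(err_prior, err_post)
```

The thesis would replace the linear encode/decode pair with a trained autoencoder and add a statistical model of the latent-approximation error, but the structure of the update is the same: all covariances and the gain live in the k-dimensional latent space rather than in R^n.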

The systems considered for the development and mathematical analysis of the algorithms will be derived from standard problems in data assimilation: Burgers’ equation, the shallow-water equations and Lorenz’s system.

These algorithms will be exercised on problems whose aim is to predict physical fields governed by partially known laws, either because the laws are parametric or because they are subject to errors with known statistics. The results will be implemented in commonly used platforms such as PyTorch.

This thesis is part of the Data Assimilation and Machine Learning Chair of the 3IA ANITI Institute.

The work will take place in Toulouse, within the institute, in collaboration between the chairs on data assimilation and on learning under physical constraints. The case studies will be representative of forecasting problems in chaotic systems (as encountered in meteorology) or in non-chaotic systems encountered in industrial design (aeronautics, for example).

The student will be co-supervised by the main chair holder (S. Gratton, optimization and data assimilation) in cooperation with co-supervisors specialized in data assimilation and deep learning.

#### Towards sophisticated conversational assistants for aeronautics

**Advisors:** Nicholas Asher, Philippe Muller (IRIT)
**e-mail:** Nicholas.asher@irit.fr
**Net salary:** €2,096 per month with some teaching (64 hours per year on average)
**Duration:** 36 months

**Abstract**

The objective of the thesis is to significantly improve the performance of current solutions in order to reliably extract relevant semantic and pragmatic information from a conversation. The extraction of this information strongly depends on the discourse context of the conversation and on the way the information of a statement is integrated in this context.

The work carried out in this thesis will consist in particular in transcribing a speech recognition output into a command executable by the system, and in verifying that this transcription is correct, in particular by providing a confidence indicator on the understanding of the spoken command.

**Context and Challenges:**

The proposed use case for this subject concerns the aeronautical environment. Aircraft manufacturers and equipment manufacturers are planning to develop systems that perform various tasks on voice commands from operators or from the control tower (air traffic control, or ATC). These systems must be able to understand and correctly execute the tasks required. While there are currently powerful speech recognition solutions that can accurately transcribe a spoken sentence into text, their performance is limited when it comes to extracting a somewhat complex goal or request from the sentence.

The modest performance of systems that transcribe complex requests into actions stems directly from the fundamental difficulty of reliably extracting relevant semantic and pragmatic information from a conversation. This is largely because the understanding of a conversational element, and thus the extraction of semantic and pragmatic information, depends heavily on the discourse context of the conversation and on the way the information of an utterance is integrated in this context.

**To illustrate, consider the following query:**

« Houston ATC: N8937Y, turn left 240 direct Alexandria, then Husdzy four arrival into IAH. Descend and maintain FL310; expect lower in 5 minutes ».

In this communication, the understanding of the request involves the following elements:

- The system must already be certain that the aircraft is heading towards the Houston airport. If not, it must immediately request clarification from ATC.
- The system needs to understand that, from its current position, the aircraft must first turn left, then proceed directly to the Alexandria VOR (and that this heading will bring it to the desired VOR), and finally, execute the STAR HUDZY4 approach, which it must confirm it has in its repertoire.

This relatively classic example illustrates several key points of the problem. First, there is the issue of grounding concepts and objects in the system’s knowledge base. Second, some information is inferred from the context: for example, the sequence of actions just described is signaled not by verb tenses but by the discourse connector « then ». Third, the actions in this communication do not form a simple sequence. The second sentence, « Descend and maintain FL310; expect lower in 5 minutes », elaborates a facet of the trajectory: on its way to Alexandria, the aircraft must descend from its current altitude to 31,000 feet and plan a further descent in 5 minutes. This parallelism, or « elaboration » of the plan, is inferred discursively and is pragmatic information that dictates how to integrate the content of the statement.
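To make the contrast concrete, a shallow rule-based extraction over the sample clearance can be sketched as follows. The regex grammar is a toy invented for the illustration; note that such a parser recovers the individual commands but none of the discourse structure (the sequencing signaled by « then », or the elaboration relation between sentences), which is precisely the information the thesis targets.

```python
import re

# Toy rule-based parser for a fragment of ATC phraseology (illustrative only;
# a real system needs robust spoken-language understanding, not regexes).
CLEARANCE = ("N8937Y, turn left 240 direct Alexandria, then Husdzy four "
             "arrival into IAH. Descend and maintain FL310; expect lower in 5 minutes.")

PATTERNS = [
    ("turn",    re.compile(r"turn (left|right) (\d{3})", re.I)),
    ("direct",  re.compile(r"direct (\w+)", re.I)),
    ("arrival", re.compile(r"(\w+ \w+) arrival into (\w+)", re.I)),
    ("descend", re.compile(r"descend and maintain FL(\d+)", re.I)),
]

def parse(utterance):
    """Extract an ordered list of (action, arguments) commands."""
    found = []
    for name, pat in PATTERNS:
        for m in pat.finditer(utterance):
            found.append((m.start(), name, m.groups()))
    return [(name, args) for _, name, args in sorted(found)]

for cmd in parse(CLEARANCE):
    print(cmd)
```

The parser emits the commands in surface order, but nothing in its output says whether « descend and maintain FL310 » is a fourth step after the arrival or an elaboration of the trajectory already underway; resolving that requires the discourse-level reasoning discussed above.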

Finally, in this framework, exploiting the conversational context is essential to guarantee the quality of the information extracted from a conversational turn. Such exploitation allows questions, corrections or revisions of what an interlocutor has understood, using dialogical devices such as clarification questions. The purpose of these questions is to ensure that the interlocutor can confirm with the speaker the information extracted from the message, or correct it if necessary.

Mastering these dialogical means would guarantee the transmission of semantic and pragmatic information in the context of a critical system.

Although we now have a good theoretical understanding of the conversational tools that allow controlled extraction of semantic and pragmatic information from a conversation [1]-[3], difficulties remain in implementing these tools, as is the case for many semantic inferences [4].

__Objectives of the thesis:__

The framework that we envisage is constrained, first, by the fact that the tasks that motivate or provoke verbal communications can be identified in advance: ATC needs to communicate about weather conditions, changes in frequency, the prescribed route (ATC clearance), and modifications of the flight path (commands to change altitude, direction or possibly part of the route).