How to apply?
Send your detailed CV, a cover letter and a copy of your degrees to aniti-postdoc@univ-toulouse.fr.
Examples of your scientific publications and letters of recommendation will be a plus.
Open positions
Human – Machine Cooperation
Supervision: ICAM – Higher education and research division, Toulouse site
Duration: 18 months
> Job description (pdf)
Cognitive science: citizen consultation
Supervision: Science outreach service (Service de diffusion de la culture scientifique et technique) of the Université fédérale Toulouse Midi-Pyrénées
Salary: €34,000 gross per year
Duration: 20 to 24 months
Context
As part of the COCACIA project (Citizen consultation on the knowledge, acceptability and ethical issues of AI and data) of the Toulouse interdisciplinary institute of artificial intelligence (ANITI), we offer a post-doctoral research contract of 20 to 24 months, starting in March 2021.
The COCACIA project aims to establish an overview of the knowledge and acceptability of the inhabitants, mainly of the Occitanie region, regarding the use of data, algorithms and the ethical issues related to AI. More specifically, a consultation will be launched among four populations: school pupils, young adults (high-school and university students), AI professionals (industry employees) and citizens at large. Owing to its originality, namely its construction as four distinct surveys, this citizen consultation will probably be the largest survey on this topic conducted in France. Through its approach, this interdisciplinary and collaborative research project will help delineate what individuals know and how they perceive AI and the use of data. It will also provide data, usable by scientists, on more specific themes, which will subsequently be valorised through publications. The research questions will address the social acceptance of a new technology, as well as the population's trust in it and in the resulting use of personal data, both of which are essential for its diffusion in society. The research questions will be refined according to the scientific interests of the recruited person.
Runtime Verification for Critical Machine Learning Applications
Advisors: Kevin Delmas, kevin.delmas@onera.fr | Jérémie Guiochet, jeremie.guiochet@laas.fr
Net salary: 2 600€ per month with some teaching (20 hours per year on average)
Duration: one year (renewable once)
Description
In the last decade, the application of Machine Learning (ML) has attracted increasing interest in various application domains, especially for a wide panel of complex tasks (e.g. image-based pedestrian detection) classically performed by human operators (e.g. the driver). In ML, the objective is to synthesize an intended function (e.g., detect a pedestrian in an image) from a set of examples (images of roads). The massive usage of such techniques has demonstrated an effectiveness well beyond that of other classical methods. Naturally, the designers of critical systems would like to benefit from the effectiveness of ML-based models, mainly for complex image processing and model reduction.
But, beyond effectiveness, the designers of critical systems must demonstrate that the obtained models are reasonably safe. Providing elements demonstrating the safety of a system is a classical issue addressed by various techniques tailored to the nature of the system, and covered by many safety standards (DO178C in aeronautics, ISO26262 in automotive, IEC61508 in electronics, etc.). Nevertheless, the specificities of ML-based software raise new safety threats that are not addressed by classical techniques. Despite very good results during the training and testing of an ML component, it is not possible to provide sufficient guarantees that the training data set will be representative of the situations encountered in real life, that during operational life the system will not face adversarial situations (situations slightly different from the training ones that nevertheless lead to a completely different output of the ML model, also called adversarial attacks), or that the distribution of situations will not differ from the one seen during training (distribution shift). All these threats are a major brake on ML deployment in safety-critical applications.
Most existing works focus on the quality of the training data in order to increase the robustness of the ML algorithm. However, to avoid overfitting, it is accepted that enriching the dataset has its limits, and a promising approach is to monitor the system at runtime, during operational life, in order to keep the system in a safe state despite errors of the ML component. A first approach, inspired by fault tolerance and close to safety monitoring [3], is to adapt the simplex architecture to the monitoring of a neural controller, using a decision block able to detect an error and to switch from the neural controller to a high-assurance (but less performant) controller [4]. Works such as [1] can be used to monitor the distance between the input/output distribution observed during exploitation and the one observed during the learning phase, and to raise an alert when a “significant gap” is observed. Other works such as [2], dedicated to neural networks, propose to collect neuron activation patterns during the learning phase and, through online monitoring, detect the occurrence of an unknown activation pattern that may indicate an erroneous prediction.
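As an illustration of the activation-pattern idea of [2], the following minimal sketch (a hypothetical example, not code from the cited work) records the binary on/off pattern of a hidden layer over the training set and flags, at runtime, any input whose pattern was never observed during training:

import numpy as np

class ActivationPatternMonitor:
    # Toy monitor in the spirit of [2]: store the binary on/off patterns of a
    # hidden layer seen on training data; flag unseen patterns at runtime.
    def __init__(self, hidden_layer_fn):
        self.hidden_layer_fn = hidden_layer_fn   # maps a batch of inputs to hidden activations
        self.known_patterns = set()

    def fit(self, X_train):
        for h in self.hidden_layer_fn(X_train):
            self.known_patterns.add(tuple((h > 0).astype(int)))

    def is_known(self, x):
        h = self.hidden_layer_fn(x[None, :])[0]
        return tuple((h > 0).astype(int)) in self.known_patterns

# Illustration with a random, untrained ReLU layer (4 inputs, 8 hidden units)
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))
hidden = lambda X: np.maximum(X @ W, 0.0)

monitor = ActivationPatternMonitor(hidden)
monitor.fit(rng.normal(size=(1000, 4)))              # patterns seen during "training"
print(monitor.is_known(rng.normal(size=4)))          # input similar to the training data
print(monitor.is_known(np.array([10.0, -10.0, 10.0, -10.0])))  # far-from-training input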
All these works are ongoing, with very preliminary results, and currently no safety model is integrated in such proposals. We propose in this post-doc to specify, implement and verify a new runtime verification approach for ML: an adversarial runtime monitor. This approach is based on adversarial examples generated at runtime and used to assess whether the ML component may be fooled into an unsafe state. This should allow the monitor to detect whether the ML is in a potentially erroneous and unsafe state, or in a potentially erroneous but safe state. Once such a monitor is designed, we also plan to use formal methods (verification) to prove the correctness of the monitor. This work will be applied to a case study, an ML component for drone collision avoidance studied and deployed in the context of the Delta project [the code of the Delta project is available at https://github.com/delta-onera/delta_tb/tree/master/workspace/isprs].
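The following sketch gives one possible, highly simplified reading of such a monitor, under the assumption of a differentiable classifier: perturb the current input with a single FGSM step at runtime and raise an alert when the prediction flips, i.e. when the model sits close to a decision boundary and could plausibly be fooled. It is a hypothetical illustration, not the monitor to be designed in this post-doc, which must additionally integrate a safety model:

import torch
import torch.nn as nn

def adversarial_alert(model: nn.Module, x: torch.Tensor, epsilon: float = 0.05) -> bool:
    # Return True when a one-step FGSM perturbation of x changes the model's
    # prediction, suggesting the current decision is fragile.
    model.eval()
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    pred = logits.argmax(dim=1)
    loss = nn.functional.cross_entropy(logits, pred)  # loss w.r.t. the predicted label
    loss.backward()
    x_adv = (x + epsilon * x.grad.sign()).detach()
    adv_pred = model(x_adv).argmax(dim=1)
    return bool((adv_pred != pred).any())

# Usage on a toy, untrained classifier (for illustration only)
toy_model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
image_batch = torch.randn(1, 3, 32, 32)
print(adversarial_alert(toy_model, image_batch))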
References
[1] Mahdieh Abbasi, Arezoo Rajabi, Azadeh Mozafari, Rakesh B. Bobba, and Christian Gagné. Controlling over-generalization and its effect on adversarial examples generation and detection. ArXiv e-prints, 1808.08282, 2018.
[2] Chih-Hong Cheng, Georg Nührenberg, and Hirotoshi Yasuoka. Runtime monitoring neuron activation patterns. CoRR, abs/1809.06573, 2018.
[3] Mathilde Machin, Jérémie Guiochet, Hélène Waeselynck, Jean-Paul Blanquart, Matthieu Roy, and Lola Masson. SMOF – A Safety MOnitoring Framework for Autonomous Systems. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 48(5):702–715, May 2018.
[4] Dung Phan, Nicola Paoletti, Radu Grosu, Nils Jansen, Scott A. Smolka, and Scott D. Stoller. Neural simplex architecture. ArXiv e-prints, abs/1908.00528, 2019.
Behavioral ethics for algorithmic decision making
Advisor: Jean-François Bonnefon, jfbonnefon@gmail.com
Net salary: minimum of 2400€ /month
Duration: three years
A postdoctoral position for a social or cognitive scientist is available at the Artificial and Natural Intelligence Toulouse Institute, France (ANITI), to begin in September 2020 under the supervision of Dr. Jean-François Bonnefon (CNRS, Toulouse School of Economics, ANITI, and Institute for Advanced Study in Toulouse).
This is a 3-year position related to the ethics of Artificial Intelligence, and specifically to the design of experiments allowing non-expert citizens to express their preferences about the way algorithms make decisions which can substantially impact their well-being and life outcomes. Special attention will be given to situations that involve a tradeoff between mutually inconsistent objectives, issues of fairness, or the distribution of a limited resource. The postdoctoral researcher will design experiments built on interfaces which allow participants to visually understand ethical dilemmas and provide an informed preference on the way they should be solved. The researcher will also contribute to the analysis of the data and the preparation of the resulting publications. The project will be carried out in a multidisciplinary context, involving experts from psychology, anthropology, computer science, law, political science, and economics. Funds for conducting the research will be available, as well as opportunities to visit other labs and present the research at conferences.
The preferred candidate would have the following qualifications:
- PhD in social or cognitive science (psychology, economics, political science, anthropology, etc.) already completed or expected for Spring/Summer 2020
- Experience in the design and conduct of behavioral experiments
- Proficiency with the R programming language, excellent skills in data visualisation
- Experience with multidisciplinary collaborations
- Strong interest in the behavioural study of moral preferences and/or in the ethical challenges raised by Artificial Intelligence
- Fluent spoken and written English (speaking French is NOT a requirement or an asset for the position)
Please send your application by email to jfbonnefon@gmail.com. Your application should include a detailed CV (with your list of publications), a 2-page statement of interest, and the contact information of two individuals who can be contacted for a reference letter.
From machine learning to knowledge compilation and back
Advisors: Hélène Fargier, fargier@irit.fr – Jérôme Mengin, mengin@irit.fr
Net salary: about 2,600 euros per month, with some teaching (64 hours per year on average)
Duration: 2 years
DESCRIPTION
Machine Learning (ML) and Knowledge Compilation (KC) are two very rich research areas in Artificial Intelligence. Interactions between ML and KC have however been little explored; the objective of this post doc is to fill this gap.
In ML, the objective is to find a model that explains, and generalizes, some input data. The choice of a target language, which defines the class of models in which to search for the output model, is crucial: if the class is too small, there may be no model close enough to the unknown one from which the observed data have been generated; if the class is too large, the amount of input data required to precisely identify the output model may be unrealistically large. Research in ML has led to the design of powerful tools to characterize the expressivity of target languages for the output models, and their learnability, such as the VC dimension, the Rademacher complexity and the PAC model [Blumer et al., 1989; Bartlett and Mendelson, 2001]. Besides, the need to learn complex, often graphical, models, such as Bayesian networks, has highlighted the importance of studying how the computational complexity of the output model can be capped; for instance, recent works have proposed to constrain the learning process in order to output Bayesian networks with low tree-width.
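As a reminder of what these learnability tools quantify (one standard formulation, with constants omitted, and not specific to this project): for a hypothesis class H of VC dimension d, with probability at least 1 - δ over an i.i.d. sample of size m, every h in H satisfies

R(h) \;\le\; \widehat{R}_m(h) \;+\; O\!\left(\sqrt{\frac{d\,\ln(m/d) + \ln(1/\delta)}{m}}\right),

where R(h) is the true risk and \widehat{R}_m(h) the empirical risk on the sample; the richer the target language (the larger d), the larger m must be for the bound to remain informative.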
In KC, the objective is to compile the fixed part F of a knowledge base, initially expressed in some very expressive but intractable language, so that the resulting model C(F) enables faster query answering modulo (with respect to) a set V of varying pieces of knowledge. Here again, the choice of the target language in which the resulting model is expressed is crucial. It should be rich enough to enable an exact representation of the knowledge base (or an approximation of it, if that is sufficient for the intended application), but constrained enough to guarantee fast inference, in particular in domains where the final application runs on-line. Besides efficient compilation methods, research in KC has highlighted the need for KC maps, which provide a multi-criteria analysis of target languages for KC. Rich KC maps have already been established for propositional languages. Current, promising research directions aim at extending these maps towards richer, valued languages, and at studying approximate compilation in more detail.
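To make the compilation idea concrete, here is a minimal, hypothetical sketch in Python (hand-built example, not tied to any particular KC system): once a propositional knowledge base has been compiled into an ordered BDD, model counting, which is #P-hard on CNF formulas, takes time linear in the size of the compiled form.

from functools import lru_cache

# A tiny hand-built OBDD for x0 AND (x1 OR x2): terminals are True/False,
# internal nodes are (variable index, low child, high child), and variables
# are tested in increasing index order along every path.
N_VARS = 3
FALSE, TRUE = False, True
node_x2 = (2, FALSE, TRUE)
node_x1 = (1, node_x2, TRUE)
root = (0, FALSE, node_x1)

@lru_cache(maxsize=None)
def model_count(node, level=0):
    # Number of satisfying assignments over variables level..N_VARS-1;
    # memoization makes the traversal linear in the number of OBDD nodes.
    if node is TRUE:
        return 2 ** (N_VARS - level)
    if node is FALSE:
        return 0
    var, low, high = node
    skipped = var - level  # variables not tested on this path are unconstrained
    return 2 ** skipped * (model_count(low, var + 1) + model_count(high, var + 1))

print(model_count(root))  # 3 models: x0 = 1 and (x1, x2) != (0, 0)

On the uncompiled CNF the same query would amount to solving a #SAT instance, whereas on the compiled form each node is visited once.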
The objective of this 2-year post-doc is to explore the potential interactions between KC maps and the tools established to study the complexity of ML. In both KC and ML, the expressivity of the target languages, and the difficulty of inferring or inducing models in these languages, are of prime importance. But there is, to date, little work on this interaction. Directions of research include the study of what ML tools can add to KC maps for the rich, valued languages necessary to process complex pieces of knowledge, such as preferences; and, conversely, a study of how KC maps can be useful to choose target languages for ML problems.
The ideal candidate will have a strong background in machine learning and in knowledge representation. This research will be conducted within the stimulating environment of the Artificial and Natural Intelligence Toulouse Institute.
References
• Adnan Darwiche, Pierre Marquis: A Knowledge Compilation Map. J. Artif. Intell. Res. 17: 229-264 (2002)
• A. Blumer, A. Ehrenfeucht, D. Haussler, and M. K. Warmuth: Learnability and the Vapnik-Chervonenkis dimension. Journal of the ACM, 36(4):929–965, 1989.
• P. L. Bartlett and S. Mendelson: Rademacher and Gaussian Complexities: Risk Bounds and Structural Results. In COLT/EuroCOLT, vol. 2111 of Lecture Notes in Computer Science, pp. 224–240. Springer, 2001.
Modeling the shift from efficient to inefficient divided attention using EEG/fMRI/MEG
Advisors: Frédéric Dehais, dehais@isae.fr – Caroline Chanel, caroline.chane@isae.fr – Daniel Callan, daniel.callan@isae.fr
Net Salary: 2700€ per month with some teaching
Duration: 2 years
DESCRIPTION
Optimal distribution of attention is a key issue in our everyday multitasking activities. It relies on a tradeoff between exploration and exploitation attentional policies to select and maintain attentional focus on the relevant streams of information while remaining alert to unexpected changes. Several studies have identified neural correlates supporting such attentional dynamics. For instance, top-down and bottom-up attention are respectively delineated by the dorsal and ventral neural networks, which interact closely with the anterior cingulate cortex for resource allocation. Efficient divided attention results in an enhancement of task-relevant network activity via cross-frequency coupling in the theta and gamma bands, and in an enhancement of secondary-task network activity at a different phase from that of the primary-task networks. However, when task demand exceeds mental capacity, the homeostasis between the ventral and dorsal pathways is disrupted, leading to the suppression of non-primary-task-relevant networks (via increased alpha oscillations) and to decreased theta-gamma coupling in primary-task networks. Although this shielding mechanism can prevent mental overload and distraction, missing critical information (e.g. auditory alarms) can have devastating consequences in real-life scenarios such as driving or operating an aircraft.
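For reference, the theta-gamma cross-frequency coupling mentioned above is commonly quantified with a phase-amplitude modulation index. The sketch below is a generic illustration using standard SciPy tools (a Tort-style index on a synthetic signal), not the project's analysis pipeline:

import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    # Zero-phase band-pass filter between lo and hi Hz
    sos = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band", output="sos")
    return sosfiltfilt(sos, x)

def modulation_index(x, fs, phase_band=(4, 8), amp_band=(30, 80), n_bins=18):
    # Tort-style phase-amplitude coupling: 0 means no coupling between the
    # phase of the low-frequency band and the amplitude of the high-frequency band.
    phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))
    bins = np.linspace(-np.pi, np.pi, n_bins + 1)
    mean_amp = np.array([amp[(phase >= bins[i]) & (phase < bins[i + 1])].mean()
                         for i in range(n_bins)])
    p = mean_amp / mean_amp.sum()
    # Normalized Kullback-Leibler distance to a uniform phase-amplitude profile
    return (np.log(n_bins) + np.sum(p * np.log(p))) / np.log(n_bins)

# Synthetic example: gamma bursts locked to the theta peak give a non-zero index
fs = 500.0
t = np.arange(0, 20, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)
gamma = (1 + theta) * np.sin(2 * np.pi * 40 * t)
print(modulation_index(theta + 0.3 * gamma + 0.1 * np.random.randn(t.size), fs))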
The candidate is expected to design and conduct experiments to investigate the shift from efficient to non-efficient divided attention between the visual and auditory modalities. These experiments will be conducted 1) in the lab using fMRI and high-density EEG, and 2) under highly ecological conditions with portable EEG.
The candidate is expected to perform state-of-the-art analyses including effective connectivity, and to apply inverse reinforcement learning (IRL) techniques 1) to estimate the efficient (optimal) and non-efficient (sub-optimal) policies with respect to the best expected distribution of attention, and 2) to predict long-term attentional efficiency.
The ideal candidate will have a strong background in neuroscience, brain imaging (EEG and/or fMRI/MEG), signal processing, and artificial intelligence for automated learning and planning. She/he will work in close collaboration with the three other researchers (2 PhD students, 1 post-doc) funded by the ANITI program. This research will be conducted within the stimulating environment of the Neuroergonomics lab at ISAE-SUPAERO (25 researchers) and the Artificial and Natural Intelligence Toulouse Institute. The candidate will have the opportunity to make an extended stay at CiNet (Osaka, Japan) in Daniel Callan's research department to conduct the fMRI or MEG experiment.
References
•Durantin, G., Dehais, F., Gonthier, N., Terzibas, C., & Callan, D. E. (2017). Neural signature of inattentional deafness. Human brain mapping, 38(11), 5440-5455.
•Dehais, F., Rida, I., Roy, R. N., Iversen, J., Mullen, T., & Callan, D. A pBCI to Predict Attentional Error Before it Happens in Real Flight Conditions.
•Tombu, M. N., Asplund, C. L., Dux, P. E., Godwin, D., Martin, J. W., & Marois, R. (2011). A unified attentional bottleneck in the human brain. Proceedings of the National Academy of Sciences, 108(33), 13426-13431.
•Doesburg, S. M., Roggeveen, A. B., Kitajo, K., & Ward, L. M. (2007). Large-scale gamma-band phase synchronization and selective attention. Cerebral cortex, 18(2), 386-396.
• Arora, S., & Doshi, P. (2018). A survey of inverse reinforcement learning: Challenges, methods and progress. arXiv preprint arXiv:1806.06877.
Implementation of neuroadaptive technology to optimize humans-machines teaming
Advisors: Frédéric Dehais, dehais@isae.fr – Caroline Chanel, caroline.chane@isae.fr – Nicolas Drougard, nicolas.drougard@isae.fr
Net Salary: 2700€ per month with some teaching
Duration: 2 years
DESCRIPTION
Autonomous systems (e.g. aircraft, highly automated cars, robots) are becoming increasingly present in a wide variety of operational contexts and in everyday-life situations. Most scientific and technical efforts have focused on the implementation of AI and smart sensors. However, these developments are generally achieved without questioning the integration of the human in the control/decision loop: the human operator is considered as a providential agent able to take over when sensors or automation fail. Accident analyses in many critical domains (e.g. aviation, nuclear power plants, high-frequency trading) highlight that breakdowns in human-artificial agent interaction are one of the major contributing factors to recent industrial disasters. A promising avenue to deal with these issues is to consider that artificial and human agents have complementary skills/abilities and are likely to provide better performance when joined efficiently than when used separately. This approach, known as mixed-initiative, defines the roles of the human and artificial agents according to their recognized skills. The implementation of such an interaction presupposes 1) developing passive Brain Computer Interfaces (pBCI), also known as neuro-adaptive technology, to “sense” human performance, and 2) implementing a decision system dedicated to dynamically adapting human-machine interactions.
The objectives of this post-doctoral position are:
• To design neuro-adaptive technology to measure multiple users' brains while they interact with each other and with several artificial agents. This recent approach, referred to as Hyperscanning, consists of the continuous and synchronous monitoring of at least two brains with portable brain imaging sensors.
• To develop a decision-making unit that accounts for uncertainty on actions, or the potentially non-deterministic behavior of the humans, and for partially observable states (e.g. degraded mental states, artificial agent perception), as illustrated by the belief-update sketch below. This mixed-initiative driving system will be governed by a policy, or strategy, that maximizes the overall human-machine teaming performance. The candidate is expected to design an experimental protocol with at least two humans interacting with two artificial agents, and to drive the interactions between natural and artificial entities in an online fashion.
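For context, decision making under partially observable states is typically formalized as a (MO)MDP/POMDP in which the system maintains a belief over hidden states, here, hypothetically, the operator's mental state, and updates it from pBCI observations. The sketch below uses made-up numbers purely for illustration and is not the model to be developed in the project:

import numpy as np

# Hypothetical two-state model of the operator: 0 = engaged, 1 = overloaded
# T[a, s, s']: transition probabilities for action a (0 = keep task, 1 = offload task)
T = np.array([[[0.8, 0.2],     # keep task, from engaged
               [0.1, 0.9]],    # keep task, from overloaded
              [[0.95, 0.05],   # offload task, from engaged
               [0.6, 0.4]]])   # offload task, from overloaded
# O[a, s', o]: probability of pBCI observation o (0 = "low workload", 1 = "high workload")
O = np.array([[[0.7, 0.3],
               [0.2, 0.8]],
              [[0.7, 0.3],
               [0.2, 0.8]]])

def belief_update(b, a, o):
    # Bayes filter: b'(s') is proportional to O(o | s', a) * sum_s T(s' | s, a) * b(s)
    predicted = T[a].T @ b          # predictive distribution over next states
    new_b = O[a, :, o] * predicted
    return new_b / new_b.sum()

b = np.array([0.9, 0.1])            # initial belief: operator probably engaged
b = belief_update(b, a=0, o=1)      # keep the task, then observe "high workload"
print(b)                            # belief shifts toward the overloaded state

A policy then maps this belief, rather than the unobservable state itself, to the next adaptation action.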
The ideal candidate will have a background in neuroscience and in artificial intelligence for automated learning and planning. She/he will work in close collaboration with the three other researchers (2 PhD students, 1 post-doc) funded by the ANITI program. This research will be conducted within the stimulating environment of the Neuroergonomics lab at ISAE-SUPAERO (25 researchers) and the Artificial and Natural Intelligence Toulouse Institute.
References
•De Souza, P. E. U., Chanel, C. P. C., & Dehais, F. (2015, November). MOMDP-based target search mission taking into account the human operator’s cognitive state. In 2015 IEEE 27th International Conference on Tools with Artificial Intelligence (ICTAI) (pp. 729-736). IEEE.
• Babiloni, F., & Astolfi, L. (2014). Social neuroscience and hyperscanning techniques: past, present and future. Neuroscience & Biobehavioral Reviews, 44, 76-93.
• Zhang, D., Lin, Y., Jing, Y., Feng, C., & Gu, R. (2019). The Dynamics of Belief Updating in Human Cooperation: Findings from inter-brain ERP hyperscanning. NeuroImage, 198, 1-12.
• Charles, J. A., Chanel, C. P., Chauffaut, C., Chauvin, P., & Drougard, N. (2018, December). Human-Agent Interaction Model Learning based on Crowdsourcing. In Proceedings of the 6th International Conference on Human-Agent Interaction (pp. 20-28). ACM.
Pushing the computational frontiers of reasoning with logic, probabilities and preferences
Advisors: simon.de-Givry@inra.fr, thomas.schiex@inra.fr
Net Salary: 2600€ per month with some teaching
Duration: from one to four years