Goals of the chair
Embedding ML algorithms in safety-critical products, particularly in the aeronautical domain with its stringent certification requirements, is a significant need of the transportation industry. Over the past few years, several certification standards have been under development to define the objectives and activities enabling the use of ML in the aeronautical field:
- the roadmap and guidance for developing ML-based systems written by the European certification authority EASA, and
- the aeronautical guidelines ED 324 / ARP6983, still to be published, developed by the joint working group EUROCAE WG-114/SAE G-34.
Note that several members of the chair actively participate in this standardization group.
Principal investigators
- Claire Pagetti (DR ONERA)
- Mélanie Ducoffe (AI research engineer, Airbus)
- Adrien Gauffriau (AI Expert, Airbus)
CertifEmbAI aims at bridging some of the gaps raised by certification and safety constraints, focusing in particular on the embeddability and verifiability of ML-based systems. The approach promoted by CertifEmbAI ranges from compressing and developing tiny ML models that are formally verified against the system and safety requirements, up to their low-level execution on complex hardware accelerators such as deep learning accelerators or GPUs, while preserving the semantics of the off-line trained model on the final hardware platform.
Co-chairs
- Thomas Carle (MCF IRIT-UT3)
- Kevin Delmas (Researcher ONERA)
- Jérémie Guiochet (Prof. LAAS-UT3)
For that purpose, four challenges have been identified. The first consists in feeding the ML training activities with constraints coming from the hardware target and from the verification needs. Taking these constraints into account greatly facilitates efficient deployment on hardware accelerators (for example, by choosing an appropriate tensor size or type of convolution) and can reduce verification time.
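As a toy illustration of such a hardware-driven constraint, the sketch below rounds a layer's channel count up to a multiple of an accelerator's vector width; the width of 16 and the helper name are hypothetical, not taken from any specific target.

```python
def hw_friendly_channels(requested, vector_width=16):
    """Round a channel count up to a multiple of the accelerator's
    vector width (hypothetical constraint), so that the deployed
    kernels can stay fully vectorized with no remainder loop."""
    return -(-requested // vector_width) * vector_width  # ceiling division

print(hw_friendly_channels(37))  # 48
print(hw_friendly_channels(32))  # 32
```

Applying the constraint at training time, rather than after the fact, avoids retraining once the deployment tooling rejects an awkward tensor shape.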
The second concerns the proper description of ML models, so that they can be exchanged between frameworks and interfaced with the usual parallel programming paradigms. The main purpose is to ensure semantic preservation of the trained model on the final target. Initial work was done on the NNEF format, and more recently some members of the chair have contributed to the ONNX safety-related profile (see 1) below).
The third challenge addresses the extensive use of formal methods to verify the correctness of the ML-based system, both off-line and at runtime, to ensure that no catastrophic situation is reached because of poor performance. Formal methods are recognized as an acceptable means of compliance in certification and have already shown strong capabilities to track errors and reduce development costs. Mélanie Ducoffe is developing the verifier DECOMON (see 2) below). Off-line formal verification of small-dimension neural networks was successfully applied during the ANITI 3AI Institute. For the cluster, the chair tackles more complex ML tasks based on computer vision (e.g. object detection). In addition to off-line verification, on-line fault-tolerance mechanisms, which we refer to as runtime monitors, are analyzed and designed to maintain an acceptable behavior during operation despite perception errors (i.e. to remain in the ODD, the operational design domain).
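One classical ingredient of off-line neural-network verification is interval bound propagation: given a box of possible inputs, sound element-wise bounds are pushed through each layer, and a safety property holds whenever the output bounds prove it. The sketch below (plain numpy, dense+ReLU layers only) illustrates the principle; it is not DECOMON, which computes much tighter bounds.

```python
import numpy as np

def ibp_dense_relu(layers, lo, hi):
    """Propagate element-wise input bounds [lo, hi] through a list of
    (W, b) dense layers followed by ReLU, returning sound output bounds."""
    for W, b in layers:
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        # Positive weights carry the lower bound to the lower bound,
        # negative weights swap the roles of lo and hi.
        new_lo = Wp @ lo + Wn @ hi + b
        new_hi = Wp @ hi + Wn @ lo + b
        lo, hi = np.maximum(new_lo, 0.0), np.maximum(new_hi, 0.0)
    return lo, hi

rng = np.random.default_rng(1)
layers = [(0.5 * rng.standard_normal((6, 3)), np.zeros(6)),
          (0.5 * rng.standard_normal((2, 6)), np.zeros(2))]
lo, hi = ibp_dense_relu(layers, np.full(3, -0.1), np.full(3, 0.1))
# If hi[i] < threshold, the property "output i stays below threshold"
# is formally proved for every input in the box [-0.1, 0.1]^3.
```

The difficulty the chair targets is that such bounds loosen quickly on high-dimensional perception networks, which is what makes computer-vision tasks like object detection a genuine verification challenge.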
Finally, the last challenge tackles the predictable and safe deployment of ML models on complex hardware. Among the outcomes of the chair are the predictable C code generator ACETONE (see 3) below) and the standalone compiler (see 4) below).
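The idea behind predictable code generation can be sketched as follows: emit C with statically sized arrays and fixed loop trip counts, with no dynamic allocation, so that worst-case execution time can be analyzed. The generator below is a deliberately tiny illustration for one dense+ReLU layer, not the actual ACETONE output.

```python
def emit_dense_c(name, W, b):
    """Emit C for a dense+ReLU layer with static storage and fixed
    loop bounds (illustrative sketch of predictable code generation)."""
    n_out, n_in = len(W), len(W[0])
    w_rows = ",\n    ".join(
        "{" + ", ".join(f"{v}f" for v in row) + "}" for row in W)
    b_vals = ", ".join(f"{v}f" for v in b)
    return (
        f"static const float {name}_W[{n_out}][{n_in}] = {{\n    {w_rows}}};\n"
        f"static const float {name}_b[{n_out}] = {{{b_vals}}};\n\n"
        f"void {name}(const float in[{n_in}], float out[{n_out}]) {{\n"
        f"    for (int i = 0; i < {n_out}; i++) {{  /* fixed trip count */\n"
        f"        float acc = {name}_b[i];\n"
        f"        for (int j = 0; j < {n_in}; j++)\n"
        f"            acc += {name}_W[i][j] * in[j];\n"
        f"        out[i] = acc > 0.0f ? acc : 0.0f;  /* ReLU */\n"
        f"    }}\n"
        f"}}\n")

code = emit_dense_c("layer0", [[0.5, -1.0], [2.0, 0.25]], [0.1, -0.2])
print(code)
```

Because every bound is a compile-time constant and all buffers are static, the generated function is straightforward to feed into a WCET analyzer, which is the property such generators aim for.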
The research activities will be supported by four industrial use cases, namely ACAS Xu, vision-based landing for aircraft, vision-based emergency landing for UAVs and counter-UAV systems. The chair is an active contributor to the LARD (Landing Approach Runway Detection) dataset (see 1) below).
- Methodology for designing small and tiny ML models, possibly with quantization
- Formal description of models for an unambiguous and efficient implementation.
- Safety assurance with formal verification and real-time monitoring.
- Predictable implementation
- Compliance with industrial standards and application on real use cases
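The quantization mentioned in the first theme above can be illustrated by the simplest post-training scheme: symmetric affine quantization of a weight tensor to int8, keeping the scale needed to dequantize. This is a generic sketch of the technique, not the chair's methodology.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric post-training quantization to int8.
    Returns the quantized tensor and the dequantization scale.
    Assumes w is not all-zero (otherwise the scale degenerates)."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(2)
w = rng.standard_normal((4, 4)).astype(np.float32)
q, s = quantize_int8(w)
# Rounding guarantees the reconstruction error is at most scale / 2.
err = np.max(np.abs(dequantize(q, s) - w))
```

For certification, the point of such a closed-form error bound is that it can be propagated into the verification and semantic-preservation arguments, rather than being assessed only empirically.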
- AIRBUS as the primary industrial partner
- Staff involvement
- CIFRE Thesis (2023-2026). Noémie Cohen, Formal verification of high-dimensional supervised ML systems, supervised by M. Ducoffe, C. Gabreau, C. Pagetti, and X. Pucel.
- CIFRE Thesis (2023-2026). Anthony Faure-Gignoux, Robustness of neural networks to avionic equipment failures, supervised by K. Delmas, A. Gauffriau, and C. Pagetti.
- Airbus Helicopters
- Staff involvement
- Framework for high-integrity machine learning implementation, planned PhD thesis
- CS Group
- Planned CIFRE Thesis, Methodology for optimizing ML implementation on hardware, combining quantization, formal methods, and low-level programming strategies.
- Methodology for deploying and implementing machine learning-based systems compliant with certification standards.
- Formal verification strategies for high-dimensional problems and data-oriented specification
- Runtime verification approaches for safety-critical systems
- Predictable parallel implementation on CPUs, GPUs and accelerators with associated code generators
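As a concrete flavor of the runtime-verification theme listed above, a monitor can flag perception outputs that violate a plausibility assumption, for instance that a tracked estimate cannot jump arbitrarily far between consecutive frames. The class, the bound of 5.0, and the 2-D estimates below are all hypothetical.

```python
import numpy as np

class ConsistencyMonitor:
    """Runtime monitor flagging perception outputs that break a
    temporal-consistency assumption (illustrative sketch; the
    plausibility bound would come from system-level analysis)."""
    def __init__(self, max_jump):
        self.max_jump = max_jump  # max plausible displacement per frame
        self.prev = None

    def check(self, estimate):
        est = np.asarray(estimate, dtype=float)
        ok = (self.prev is None
              or np.linalg.norm(est - self.prev) <= self.max_jump)
        self.prev = est
        return ok  # False would trigger a fallback (e.g. discard the frame)

mon = ConsistencyMonitor(max_jump=5.0)
print(mon.check([100.0, 50.0]))  # first frame: accepted
print(mon.check([102.0, 51.0]))  # small motion: accepted
print(mon.check([160.0, 80.0]))  # implausible jump: rejected
```

Such a monitor is independent of the neural network's internals, which is what makes it usable as an on-line safety net alongside off-line verification.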