Goals of the chair

Embedding ML algorithms in safety-critical products, in particular in the aeronautical domain with its stringent certification requirements, is a significant need of the transportation industries. For the past few years, several certification standards have been under development to define the objectives and activities enabling their use in the aeronautical field:

  • the roadmap and guidance for developing ML-based systems written by the European certification authority EASA, and
  • the yet-to-be-published aeronautical guidelines, ED 324 / ARP6983, developed by the joint working group EUROCAE WG-114/SAE G-34.

Note that several members of the chair actively participate in this standardization group.

Principal investigators

  • Claire Pagetti (DR, ONERA)
  • Mélanie Ducoffe (AI research engineer, Airbus)
  • Adrien Gauffriau (AI Expert, Airbus)

CertifEmbAI aims at bridging some of the gaps raised by certification and safety constraints, focusing in particular on the embeddability and verifiability of ML-based systems. The approach promoted by CertifEmbAI covers the full range: compressing or developing tiny ML models that are formally verified against the system and safety requirements, down to their low-level execution on complex hardware accelerators such as deep learning accelerators or GPUs, while ensuring that the semantics of the off-line trained model is preserved on the final hardware platform.
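Semantics preservation can be made concrete with a small check. The sketch below, which is purely illustrative and not the chair's actual tooling, compares a float32 reference inference of one dense layer against a simulated int8 deployment of the same weights, and verifies that the deviation stays within a provable error budget:

```python
import numpy as np

def quantize(w: np.ndarray, scale: float) -> np.ndarray:
    """Symmetric int8 quantization: round to the nearest step of `scale`."""
    return np.clip(np.round(w / scale), -127, 127).astype(np.int8)

rng = np.random.default_rng(1)
W = rng.standard_normal((8, 8)).astype(np.float32)   # one dense layer's weights
x = rng.standard_normal(8).astype(np.float32)        # one input vector

scale = float(np.abs(W).max()) / 127.0
W_q = quantize(W, scale)

y_ref = W @ x                                        # off-line reference semantics
y_dep = (W_q.astype(np.float32) * scale) @ x         # dequantized "target" execution

# Each weight moves by at most scale/2 under rounding, so the output error
# is provably bounded: |y_ref - y_dep| <= (scale/2) * sum_i |x_i|.
max_err = float(np.abs(y_ref - y_dep).max())
err_bound = 0.5 * scale * float(np.abs(x).sum())
assert max_err <= err_bound + 1e-6
```

In an actual development process the tolerated deviation would come from a system-level error analysis rather than from the quantization step alone, but the shape of the argument (reference semantics, deployed semantics, bounded gap) is the same.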


For that purpose, four challenges have been identified.

The first consists in feeding the ML training activities with constraints coming from the hardware target and from the verification needs. Taking these constraints into account greatly facilitates efficient deployment on hardware accelerators (for example, by choosing an appropriate tensor size or type of convolution) and can reduce verification time.

The second concerns the proper description of ML models, both for exchanging them between frameworks and for interfacing them with the usual parallel programming paradigms. The main purpose is to ensure semantics preservation of the trained model on the final target. Initial work was done on the NNEF format, and more recently some members of the chair have contributed to the ONNX safety-related profile (see 1) below).

The third challenge addresses the extensive use of formal methods to verify the correctness of the ML-based system, either off-line or at runtime, to ensure that no catastrophic situation can be reached due to poor performance. Formal methods are recognized as an acceptable means of compliance in certification and have already shown strong capabilities to track errors and reduce development costs. Mélanie Ducoffe is developing the verifier DECOMON (see 2) below). Off-line formal verification of small neural networks was successfully applied during the ANITI 3AI Institute. For the cluster, the chair tackles more complex ML tasks based on computer vision (e.g., object detection). In addition to off-line verification, on-line fault-tolerance mechanisms, which we refer to as runtime monitors, are analyzed and designed to maintain acceptable behavior during operation despite perception errors (i.e., to remain in the ODD, the operational design domain).

Finally, the last challenge tackles the predictable and safe deployment of ML models on complex hardware. Among the outcomes of the chair are the predictable C code generator ACETONE (see 3) below) and the standalone compiler (see 4) below).
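To give a flavor of the off-line verification challenge, the sketch below implements interval bound propagation, one of the simplest sound (though incomplete) techniques for bounding a network's output over a perturbation ball. It is not DECOMON, only a minimal illustration on an arbitrary two-layer ReLU network:

```python
import numpy as np

def ibp_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through x -> W @ x + b.

    Splitting W into positive and negative parts keeps the bounds sound:
    positive weights map lower bounds to lower bounds, negative ones swap them.
    """
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def ibp_relu(lo, hi):
    """ReLU is monotone, so bounds pass through elementwise."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Tiny illustrative network: the weights are random, not a trained model.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 2)), rng.standard_normal(4)
W2, b2 = rng.standard_normal((1, 4)), rng.standard_normal(1)

x = np.array([0.5, -0.2])
eps = 0.05                       # input perturbation radius
lo, hi = x - eps, x + eps

lo, hi = ibp_affine(lo, hi, W1, b1)
lo, hi = ibp_relu(lo, hi)
lo, hi = ibp_affine(lo, hi, W2, b2)

# Any concrete input in the ball must land inside the certified bounds.
y = W2 @ np.maximum(W1 @ x + b1, 0.0) + b2
assert np.all(lo <= y) and np.all(y <= hi)
```

If the certified interval [lo, hi] avoids the unsafe region, the property is proved for every input in the ball; if not, tighter (and costlier) relaxations are needed, which is exactly where the scalability difficulty on computer-vision tasks comes from.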
The research activities are supported by four industrial use cases, namely ACAS Xu, vision-based landing for aircraft, vision-based emergency landing for UAVs, and counter-UAV systems. The chair is an active contributor to the LARD (Landing Approach Runway Detection) dataset (see 1) below).
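For a use case such as vision-based landing, a runtime monitor can gate perception outputs before they reach the control loop. The following is a hypothetical sketch, with illustrative thresholds and data shapes that are not the chair's actual design, of rejecting implausible detections so that a fallback (e.g., a go-around) is triggered instead:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    confidence: float    # detector score in [0, 1]
    bbox: tuple          # (x_min, y_min, x_max, y_max) in pixels

IMG_W, IMG_H = 1920, 1080   # assumed camera resolution

def in_odd(det: Detection) -> bool:
    """Return True if the detection is plausible enough to be consumed.

    The monitor rejects low-confidence outputs and geometrically impossible
    boxes; rejected frames hand control to a safe fallback procedure.
    """
    x0, y0, x1, y1 = det.bbox
    well_formed = 0 <= x0 < x1 <= IMG_W and 0 <= y0 < y1 <= IMG_H
    return well_formed and det.confidence >= 0.7

# A plausible runway detection passes; a degenerate box is rejected.
ok = Detection(0.92, (400, 600, 900, 750))
bad = Detection(0.95, (900, 600, 400, 750))   # x_min > x_max: impossible
assert in_odd(ok) and not in_odd(bad)
```

Real monitors combine many such checks (temporal consistency, cross-sensor agreement, out-of-distribution scores), but the contract is the same: never forward an output the system cannot justify staying inside the ODD for.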
