The increasing integration of artificial intelligence into aviation introduces new opportunities for safety enhancement, operational efficiency, and more advanced automation. This keynote examines the evolving regulatory and certification framework shaped by the European Union Aviation Safety Agency (EASA), whose AI Roadmap establishes a progressive, risk-based approach towards trustworthy AI deployment across the impacted aviation domains. In alignment with the EU AI Act, EASA promotes a harmonized approach grounded in robust AI assurance, human oversight and transparency, while acknowledging the need for proportionality across the diverse operational contexts of aviation.
Guillaume Soudain has been working at the European Union Aviation Safety Agency (EASA) since 2006, starting his career as an Expert in Software and Airborne Electronic Hardware within the Certification Directorate. In 2014, after being promoted to Senior Software Expert, he took charge of coordinating software certification policies at the Agency, before becoming EASA's Artificial Intelligence (AI) Programme Manager.
Guillaume initiated the creation of the Agency's AI Roadmap and has been leading its implementation since 2019. As head of EASA's AI Programme since 2022, he plays a key role in innovation and in the deployment of a trustworthy AI framework to facilitate the safe integration of AI in aviation, through his leadership of EASA's action plan on AI.
He also represents EASA in the joint EUROCAE WG-114 / SAE G-34 working group on AI. Previously, he was an active contributor to the EUROCAE WG-71/RTCA SC-205 committee, which produces and maintains the ED-12C/DO-178C software standard and its associated documents, and he served as EASA's representative in the Forum for Aeronautical Software (FAS).
Before joining EASA, Guillaume spent five years, from 2001 to 2006, as a Software Engineer developing automatic flight control systems for the European rotorcraft industry.
Critical Embedded Systems (CES) are embracing autonomy across automotive, space, avionics, and robotics, fueling the demand for performance-hungry AI software. Multi-Processor Systems-on-Chip (MPSoCs) offer the needed computing performance, yet their complexity, alongside the intricacies of AI software, presents significant hurdles for functional safety, particularly in software timing Verification and Validation (V&V). The core challenge stems from unpredictable resource contention among applications sharing MPSoC hardware, which can severely degrade performance. In this talk, I will explore two approaches to mitigating timing risks on complex MPSoCs. First, I will examine software-only techniques for addressing key timing V&V challenges: generating stressful scenarios for Worst-Case Execution Time (WCET) estimation, enabling WCET analysis in multi-provider environments with IP restrictions, and monitoring contention among tasks. Second, on the hardware side, I will cover strategies to extend high-performance MPSoCs with features that enable their use in safety-relevant scenarios without impacting performance, including modules for increased observability and quota control, and modules for flexible performance testing.
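As a simple illustration of the software-only side, the sketch below measures how the execution time of a memory-bound task degrades when co-running stressor threads contend for the shared cache and memory bandwidth of a multicore. It is not the speaker's tooling; the buffer size, stressor count, and names are assumptions chosen for illustration.

```c
/* Minimal sketch: contention-induced slowdown measurement (Linux/glibc assumed).
 * Build: cc -O2 -pthread contention.c -o contention
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define BUF_WORDS   (8u * 1024 * 1024)  /* 64 MiB: large enough to overflow shared caches */
#define N_STRESSORS 3                   /* assumed number of contending co-runners */

static atomic_int stop;

/* Co-runner: streams through its own large buffer to stress the shared
 * cache and memory bus (one access per 64-byte cache line). */
static void *stressor(void *arg) {
    (void)arg;
    volatile uint64_t *buf = malloc(BUF_WORDS * sizeof *buf);
    for (size_t i = 0; i < BUF_WORDS; i++)
        buf[i] = i;                      /* touch pages so reads hit real memory */
    while (!atomic_load(&stop))
        for (size_t i = 0; i < BUF_WORDS; i += 8)
            (void)buf[i];
    free((void *)buf);
    return NULL;
}

/* Task under observation: a simple memory-bound traversal, timed in ms. */
static double timed_run(volatile uint64_t *buf) {
    struct timespec t0, t1;
    uint64_t acc = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < BUF_WORDS; i++)
        acc += buf[i];
    clock_gettime(CLOCK_MONOTONIC, &t1);
    (void)acc;
    return (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_nsec - t0.tv_nsec) * 1e-6;
}

int main(void) {
    volatile uint64_t *buf = malloc(BUF_WORDS * sizeof *buf);
    for (size_t i = 0; i < BUF_WORDS; i++)
        buf[i] = i;                      /* touch pages before timing */

    printf("isolation : %8.2f ms\n", timed_run(buf));

    pthread_t co[N_STRESSORS];
    for (int i = 0; i < N_STRESSORS; i++)
        pthread_create(&co[i], NULL, stressor, NULL);

    printf("contention: %8.2f ms\n", timed_run(buf));

    atomic_store(&stop, 1);
    for (int i = 0; i < N_STRESSORS; i++)
        pthread_join(co[i], NULL);
    free((void *)buf);
    return 0;
}
```

Repeating both measurements many times and comparing their distributions gives a first-order view of the contention effects that WCET estimation, stressful-scenario generation, and contention monitoring must bound and observe.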
Francisco J. Cazorla is director of the High-Performance Embedded Systems (HPES) Laboratory at the Barcelona Supercomputing Center (BSC), which currently has more than 60 members. Dr. Cazorla's research interests cover hardware and software designs, and their associated analyses, for multicore-based high-performance and high-integrity systems. On these topics, Dr. Cazorla has coordinated several EU-funded research projects (PROARTIS, PROXIMA, MASTECS), projects funded by the European Space Agency, and bilateral projects between BSC and industry (e.g., Airbus, Thales, Rockwell Collins, IBM, and Intel). Dr. Cazorla has held an ERC Consolidator Grant, and in early 2020 he co-founded Maspatechnologies S.L., a spin-off from BSC focused on timing analysis for real-time multi-core systems. Maspatechnologies S.L. was sold to DANLAW Inc. in November 2022.
Given the rapid evolution of threats on the battlefield, our defense systems need to adapt quickly, taking advantage of new technologies such as Artificial Intelligence (AI).
As a result, defense system engineers face many challenges, including agility in engineering processes, the safe integration of AI modules, and semi-autonomous coordination between individual systems in order to gain mass against the enemy.
We can thus observe a paradigm shift towards more network-centric architectures, with the emergence of new "systems of systems" (SoS) in every domain (land, air, or sea). Compared with individual complex systems, these SoS exhibit new properties: emergent collective behaviours and extended ranges, multiple concurrent lifecycles, and separate program management frameworks, while preserving a form of operational independence between SoS elements.
In this context, we will analyze how embedded systems need to evolve in order to gain modularity, to better store, process, and exchange large amounts of data, and to ensure safe, AI-based collaborative actions between platforms.
Brig. Gen. (Armament) Delphine Dufourd-Moretti graduated as an image processing and robotics engineer (X1995, ENSTA 2000) and holds a PhD on simultaneous localization and mapping for robotic systems (INP Toulouse 2005).
She has held various positions within the DGA (Direction Générale de l'Armement, the procurement agency of the French Ministry of Defense), heading several technical departments before being appointed program manager of the SCORPION system of systems and later director of the department in charge of contracting R&T studies for the land domain. In her most recent position as a future land combat systems architect, she was in charge of preparing future capabilities, orienting R&T studies, and fostering international cooperation projects in this field.
She is currently head of the "Systems of Systems (SoS)" division, which brings together about 500 people spread across the different DGA technical centers and covers modeling and simulation, system and SoS engineering, test equipment engineering, and safety assessment of critical components and software.