ERTS

Full program

Thursday, February 5

08:00 - 09:00 - Reception / Welcome Coffee (Exposition Hall - Level 1)

09:00 - Opening & Keynote Session

Cassiopée Room

09:00

Conference Opening Session

Mohamed Kaâniche
ERTS chair, LAAS-CNRS Director

09:30

Industrial Co-chair Session

Matthieu Gallas, Airbus, VP Engineering, Fast Track leader Autonomy
Julien Batistton, NXP, Senior Director Marketing, NXP CoreRide Software-Defined Vehicle

10:00 - 10:30 - Coffee Break (Exposition Hall - Level 1)

Cassiopée Room

10:30

Guillaume Soudain

Program Manager Artificial Intelligence, European Union Aviation Safety Agency, Germany

The increasing integration of artificial intelligence into aviation introduces new opportunities for safety enhancement, operational efficiency and more advanced automation. This keynote examines the evolving regulatory and certification framework shaped by the European Union Aviation Safety Agency (EASA), whose AI Roadmap establishes a progressive and risk-based approach towards trustworthy AI deployment across the impacted aviation domains. In alignment with the EU AI Act, EASA promotes a harmonized approach grounded in robust AI assurance, human oversight and transparency, while acknowledging the need for proportionality for the diverse operational contexts of aviation.

Cassiopée Room

11:30 - Th.1.A: AI Hardware 1

Session Chair : Claire Maiza (VERIMAG)

Deploying complex Convolutional Neural Networks (CNNs) on FPGA-based accelerators is a promising way forward for safety-critical domains such as aeronautics. In previous work, we explored the Versatile Tensor Accelerator (VTA) and showed its suitability for avionic applications, developing an initial stand-alone compiler designed with certification in mind. However, this compiler still suffers from limitations that are overcome in this paper. The contributions consist of extending and fully automating the VTA compilation chain to allow complete CNN compilation and to support larger CNNs, whose parameters do not fit in the on-chip memory. The effectiveness is demonstrated by the successful compilation and simulated execution of a YOLO-NAS object detection model.

Convolutional neural networks require a large number of MAC (multiply-accumulate) operations. To achieve reasonable time performance, they often need to be executed on specialized accelerators composed of a processing unit and on-chip memory. However, due to limited on-chip memory, their computation often needs to be split into multiple parts. In safety-critical embedded systems, it is mandatory to estimate the latency of executing the models. To do so, we rely on a formal model of offloading execution behaviors. This paper formalizes the notion of strategy, where a strategy is the sequence of steps executed to compute a CNN layer. This involves quantifying memory transactions and tracking the on-chip memory footprint at all times. We formalize two strategies and formulate an optimization problem to minimize either the on-chip memory size or the latency for computing a convolution layer.
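To make the shape of such a formalization concrete, the following is a minimal sketch of the optimization problem in our own notation (not the paper's): a layer is computed in steps $s \in S$, each transferring $B_s$ bytes over a memory link of bandwidth $\beta$ and computing for $C_s$ time units, subject to the on-chip footprint never exceeding the memory size $M$:

\[
\min\; L = \sum_{s \in S}\left(\frac{B_s}{\beta} + C_s\right)
\qquad \text{s.t.} \qquad
\max_{t}\,\bigl(m_{\mathrm{in}}(t) + m_{\mathrm{w}}(t) + m_{\mathrm{out}}(t)\bigr) \le M .
\]

Swapping objective and constraint, i.e., minimizing $M$ under a bound on $L$, yields the memory-minimizing variant mentioned in the abstract.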

Guillaumet 1

11:30 - Th.1.B: Worst-Case Execution Time

Session Chair : Ralph Mader (SCHAEFFLER)

One crucial analysis in assessing the timing constraints of real-time systems is the Worst-Case Execution Time (WCET) analysis of programs executed on computing platforms. The ever-increasing complexity of computing platforms makes this type of analysis extremely cumbersome, if not impossible, with deterministic methods. To overcome these difficulties, statistical methods such as Extreme Value Theory (EVT) are attractive. In this paper, we propose to enhance the confidence in the WCET estimates obtained by EVT-based estimators by introducing a hybrid statistical analysis featuring such estimators. While hybrid analyses are not new, our objective is to compare several possible analyses that may capture the advantages of EVT-based estimators.
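For context, EVT-based WCET estimators typically follow the peaks-over-threshold construction: out of $n$ measured execution times, the $N_u$ exceedances over a threshold $u$ are fitted with a Generalized Pareto Distribution with shape $\xi$ and scale $\sigma$. The tail estimate below is the textbook form, not necessarily any of the specific estimators compared in the paper:

\[
P(X > x) \;\approx\; \frac{N_u}{n}\left(1 + \xi\,\frac{x - u}{\sigma}\right)^{-1/\xi}, \qquad x > u .
\]

Inverting this at a target exceedance probability $p$ gives the probabilistic WCET estimate $x_p = u + \frac{\sigma}{\xi}\bigl[\bigl(\frac{n}{N_u}\,p\bigr)^{-\xi} - 1\bigr]$.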

We propose preti, a novel framework for predicting software execution time in the early stages of development. preti combines static analysis and simulation to extract timing-relevant features from LLVM intermediate representation (IR), which are then used to train machine learning models. Central to our approach is a control-flow graph (CFG) alignment algorithm that establishes a structural correspondence between IR basic blocks and their associated assembly instructions. To enhance the fidelity of timing predictions, preti simulates microarchitectural behaviors—including cache accesses, branch prediction, and register spills—thereby generating execution traces that approximate real-world behavior. This design enables accurate execution time prediction without requiring native execution on embedded hardware, making preti particularly suited for early-stage performance analysis.

Guillaumet 2

11:30 - Th.1.C: Fault Tolerance

Session Chair : Frédéric Pinot (CSEE)

Modern Systems-on-Chip (SoCs) incorporate multiple processing units, enabling various possibilities for parallelizing and accelerating computations. These processing capabilities can be leveraged to enhance resiliency by introducing temporal and spatial redundancy. In this paper, we propose a set of extensions to the OpenMP annotations that cover both performance and resiliency in a unique and homogeneous framework. We present the implementation of these extensions and their integration in a complete toolchain, and evaluate its capabilities on an image processing use case.
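The paper's concrete annotation syntax is not reproduced here; the sketch below only illustrates, under invented syntax, what expressing redundancy at the OpenMP level could look like. The `replicate` and `vote` clauses are hypothetical and not standard OpenMP:

```c
#include <stddef.h>

/* HYPOTHETICAL syntax: replicate(3) would run the task on three cores
 * (spatial redundancy) and vote(majority : ...) would majority-vote the
 * listed output buffer. Illustrative only, not the paper's extension. */
void process_image(const float *in, float *out, size_t n)
{
    #pragma omp task replicate(3) vote(majority : out[0:n])
    {
        for (size_t i = 0; i < n; i++)
            out[i] = 2.0f * in[i];   /* placeholder image kernel */
    }
    #pragma omp taskwait
}
```

The appeal of this style is that the annotation language already used for parallelization then also carries the resiliency configuration.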

Dual Modular Redundancy (DMR) and Triple Modular Redundancy (TMR), often combined with diversity techniques, are widely used in safety-critical systems to achieve fault detection and/or tolerance. Traditional redundancy expects bit-identical outputs, flagging any mismatch as an error. However, emerging AI-based functionalities, such as camera and LiDAR-based object detection, are intrinsically stochastic and require only semantic correctness, allowing some variation in the outputs (e.g., slightly different confidence scores). In this work, we extend our previous semantic redundancy approach – originally developed for camera-based object detection – to LiDAR-based systems, which operate on 3D point cloud data. We propose software-only DMR and TMR schemes that introduce data-level diversity through domain-specific input transformations, preserving semantic meaning while increasing robustness. Our findings demonstrate that the method can be generalized to different AI tasks, helping improve safety and reliability in the different AI components of safety-critical systems.
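As a rough illustration of what relaxing bit-exact comparison to semantic comparison can mean for object detection, here is a minimal C sketch; the struct layout, the one-directional check, and the IoU threshold are all our own illustrative choices, not the paper's scheme:

```c
#include <stdbool.h>

typedef struct { float x, y, w, h; int cls; float conf; } Det;

/* Intersection-over-union of two axis-aligned boxes. */
static float iou(const Det *a, const Det *b)
{
    float x1 = a->x > b->x ? a->x : b->x;
    float y1 = a->y > b->y ? a->y : b->y;
    float x2 = a->x + a->w < b->x + b->w ? a->x + a->w : b->x + b->w;
    float y2 = a->y + a->h < b->y + b->h ? a->y + a->h : b->y + b->h;
    float inter = (x2 > x1 && y2 > y1) ? (x2 - x1) * (y2 - y1) : 0.0f;
    float uni   = a->w * a->h + b->w * b->h - inter;
    return uni > 0.0f ? inter / uni : 0.0f;
}

/* Replicas "agree" if every detection in a has a same-class counterpart
 * in b with sufficient overlap; confidence scores may differ freely.
 * A full DMR voter would apply the check in both directions. */
bool semantic_agree(const Det *a, int na, const Det *b, int nb, float thr)
{
    for (int i = 0; i < na; i++) {
        bool matched = false;
        for (int j = 0; j < nb && !matched; j++)
            matched = (a[i].cls == b[j].cls) && (iou(&a[i], &b[j]) >= thr);
        if (!matched)
            return false;
    }
    return true;
}
```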

12:30 - 14:00 - Lunch (Exposition Hall - Level 1)

Cassiopée Room

14:00 - Th.2.PO: Poster Overview

Session Chair : Jean-Marc Gabriel (AMPERE)

We port OpenEMS and an Apache StreamPipes extension service to RISC-V, validating the applications and the system software stacks (Java, Linux, hypervisor, and Zephyr) beneath them. We describe our hands-on experience in running these payloads.

This paper explains the current status of the reopening work for the aerospace standard “ED-80 / DO-254 Design Assurance Guidance for Airborne Electronic Hardware”. The reopening is managed by EUROCAE and RTCA, through their respective Working Groups WG-128 and SC-243. The paper covers the history of the current standard and the motivations and rationale for reopening it, explains the major stakes, and presents the previous work, the Working Group's way of working, the current status, and the schedule.

Model-driven design has significantly enhanced the development process of functional designs, with AUTOSAR providing a corresponding integration platform. However, modern agile development setups pose a challenge for software architects, requiring continuous evaluation and adaptation of the diverse functional relationships among an increasing number of components. These relationships are only partially represented within the AUTOSAR API and are implemented in project- and hardware-specific solutions. The Logical Execution Time (LET) paradigm offers a structured approach to support agile design processes, while necessitating a coordinated effort between top-down design and bottom-up integration. In this paper, we explore the requirements and propose solutions to address these challenges.

Digital twin-based control systems are increasingly playing a crucial role in many safety-critical applications across the automotive and industrial sectors to enable fail-operational architectures. These systems also play a key role in implementing predictive maintenance strategies, which ensure the availability of the system or plant under control. At the core of a digital twin is the virtual entity—a digital representation of the physical entity—that serves as the foundation for executing control and prediction decisions. The accuracy of these models directly influences the precision of the control signals generated by the digital twin and applied to the physical entity. However, over time, the model and its parameters may drift due to factors such as component aging or changes in environmental and operational conditions. If these drifts go undetected or are detected too late, they can lead to incorrect predictions and unsafe control actions, particularly in safety-critical digital twin systems. This paper explores methods for detecting such model drifts during runtime and proposes techniques for correcting them without disrupting the normal operation of the system throughout its lifecycle.

Formal methods show great promise for supporting the verification process within the avionics certification context. However, they remain intractable for high-dimensional problems. This paper summarizes the results of a CIFRE PhD thesis that paves the way for resolving this challenge.

Real-time kernels (RTKs) must be developed with the utmost care, as a failure can put the whole system at risk. One classical issue that can arise during development is mixing up data accesses belonging to different memory spaces. For example, kernel code should not access user data directly. This is a classical problem that has been partly solved in general-purpose operating systems such as Linux by adding specific type annotations and dedicated static analyses to track memory accesses. However, as some specialized systems are structured around statically generated data that may be shared between kernel and userland components, the problem becomes ubiquitous. To address it, we have developed a method based on existing static analysis techniques. Our contribution includes the methodology, design, and implementation of a dedicated Frama-C plugin used to track memory accesses and their associated execution context. We demonstrate our results by applying this plugin to the ASTERIOS RTK.

Giving computers the capability to “understand” natural language was a prerequisite to making them real contributors to system engineering activities. Indeed, during the early engineering phases, concepts remain fuzzy and polysemic, and requirements are generally informal. Today, thanks to Large Language Models, this condition seems to be fulfilled, and we can envisage teams of collaborating humans and AI agents. In this paper, we present our workflow and its supporting tools towards that goal. We describe and illustrate our approach on the functional analysis activities of a small robot.

Model-Driven Engineering facilitates the design of embedded systems by promoting abstraction and enabling early verification of design correctness. Recent approaches have integrated Large Language Models into MDE workflows to automatically generate models from textual specifications. However, these models often require extensive prompt refinement and lack formal guarantees of correctness. This paper introduces an enhanced LLM-based generation process in TTool-AI, incorporating a novel dual feedback loop that combines automated syntactic checking with formal safety verification. The loop iteratively refines LLM-generated SysML block and state-machine diagrams to ensure syntactic validity and verify safety properties. A first experimental evaluation on both academic and industrial-grade specifications demonstrates that the proposed mechanism reliably produces syntactically correct models and provides quantitative evaluation and ranking of models with respect to safety compliance, significantly reducing the effort required by engineers to obtain correct-by-construction designs.

We describe the Wayfinder Flight Laboratory to support data-driven development of Artificial Intelligence (AI) technology for aircraft autonomy. The Flight Laboratory serves as a hardware and software platform for data collection and in-the-loop testing of embedded flight functions. The crucial fuel for AI development is data of sufficient volume, diversity, and quality. Also, for safety-critical, embedded real-time systems, real-world testing is essential. The Flight Laboratory is designed to fulfill both of these requirements while also minimizing operational cost and providing an agile and flexible platform for AI development in aviation.

Cassiopée Room

15:00 - Th.3.A: AI Certification 1

Session Chair : Carsten Thomas (HTW Berlin)

This paper presents the objectives and first results of the SONNX Working Group hosted by the ONNX community. SONNX aims to complete the ONNX standard and to provide additional artifacts supporting the development and certification of critical systems embedding ML algorithms.

While Large Language Models (LLMs) have demonstrated significant impact on numerous engineering tasks, they introduce critical risks in safety-critical domains that cannot be adequately managed through bans, restrictions, or existing standards alone. This paper analyzes the challenges and limitations of applying LLMs in safety engineering activities, emphasizing the crucial distinction between standalone LLMs and comprehensive AI systems. We outline the key risks introduced by LLMs, including issues related to data quality, model architecture, operational constraints, output bias, non-determinism, failures in long-context processing, and more. To address these risks, we propose a set of mitigation measures that go beyond organizational controls, presenting a comprehensive framework that integrates both technical and procedural safeguards. These include alignment strategies, robust input/output guardrails, hybrid AI-human workflows with explicit verification protocols, and continuous validation using domain-specific benchmarks. A concrete use case involving an enterprise-grade AI platform for automotive safety engineering, with a particular focus on a MISRA compliance assistant, is introduced, along with preliminary observations from a pilot user study. We conclude that responsible LLM deployment in safety-critical systems requires measurable safeguards, maintained human expertise in critical decision-making, and continuous validation frameworks to ensure outputs remain within acceptable risk levels as defined by safety engineering standards.

This paper argues that surrogate models can be used in legacy systems because they are provable, implementable and certifiable with respect to the future development assurance guidance for ML applications anticipated by the European Union Aviation Safety Agency (EASA).

Guillaumet 1

15:00 - Th.3.B: Interference

Session Chair : Julien Galizzi (CNES)

Embedded systems undergo a rigorous certification process to ensure the airworthiness of aircraft. The process becomes increasingly difficult as the systems become more complex. Nowadays, multi-core processors (MCPs) are present in everyday equipment, so the question of their certification for aeronautical use naturally arises. Indeed, while they offer many advantages, they come with risks concerning the potential use of shared resources, leading to interference between applications and therefore unsafe behaviour for airworthiness. AMC 20-193 is one of the most prominent certification standards for the use of MCPs in avionic systems and equipment. It provides a set of ten objectives, ranging from software planning to verification of the MCPs. While these objectives enlighten the applicant on the path to certification, they are not prescriptive about the means to satisfy them. Our work aims to investigate a way to satisfy a particularly challenging objective of AMC 20-193 dealing with software verification, named MCP Software 1, in a particular context. In this paper, we present our understanding of MCP Software 1 and suggest a strategy for its satisfaction for MCPs in our scope. We also recommend an adapted process to apply the strategy.

CAOTIC is an ambitious initiative aimed at pooling and coordinating the efforts of major French research teams working on the timing analysis of multicore real-time systems, with a focus on interference due to shared resources. The objective is to enable the efficient use of multicore platforms in critical systems. Based on a better understanding of timing anomalies and interference, taking into account the specificities of applications and execution models, and revisiting the links between timing analysis and synthesis processes, significant progress is targeted in timing analysis models and techniques for critical systems, as well as in methodologies for their application in industry. In this paper, at project mid-term, we report on the project's progress. We also present some original work developed in the project and discuss open questions and future work.

The adoption of high-performance multi-core platforms in avionics and automotive systems introduces significant challenges in ensuring predictable execution, primarily due to shared resource interferences. Many existing approaches study interference from a single angle—for example, through hardware-level analysis or by monitoring software execution. However, no single abstraction level is sufficient on its own. Hardware behavior, program structure, and system configuration all interact, and a complete view is needed to understand where interferences come from and how to reduce them. In this paper, we present a methodology that brings together several tools that operate at different abstraction levels. At the lowest level, PHYLOG provides a formal model of the hardware and identifies possible interference channels using micro-architectural transactions. At the program level, machine learning analysis locates the exact parts of the code that are most sensitive to shared-resource contention. At the compilation level, MLIR-based transformations use this information to reshape memory access patterns and reduce pressure on shared resources. Finally, at the system level, Linux cgroups enforce static execution constraints to prevent highly interfering tasks from running together. The goal of our approach is to reduce memory interference and improve the system’s predictability, thereby easing the certification process of multi-core systems in safety-critical domains.

Guillaumet 2

15:00 - Th.3.C: Certification

Session Chair : Adrien Gauffriau (AIRBUS)

Safety-critical airborne embedded processing devices must undergo certification processes according to standards that were written for custom developments and COTS devices usage. However, modern electronic devices are evolving from monolithic chips toward System in Package (SiP) architectures and from proprietary IPs toward Open-source Hardware IPs, paving the way to more tailoring, modularity and potential reuse of components across projects. This shift raises the question of whether the existing certification baseline remains adequate for such modern electronic devices and whether evolutions are required. The article aims to address this question by formalizing both the Open Hardware SiP domain and the certification baseline using ontologies, framing the adequacy between these two formal models as an ontology alignment problem and drawing conclusions from this alignment.

CompCert is the first commercially available optimizing compiler that has been formally verified. The executable code it produces is proved to behave exactly as specified by the semantics of the source C program. As a consequence, the risk of system malfunctions due to miscompilation bugs can be considered eliminated. The correctness proof of CompCert C guarantees that all safety properties verified on the source code automatically hold for the executable object code as well. In this article, we outline the qualification strategy used at Airbus to apply CompCert to critical avionics software in compliance with DO-178C, DO-333, and DO-330. We describe the application context and illustrate the advantages compared to the traditional way of qualifying and certifying compilers that has been used in the past.

Achieving safe and harmonious Automated Driving Systems (ADSs) necessitates a comprehensive safety approach beyond the vehicle-centric standard ISO 21448:2022 SOTIF. While SOTIF addresses functional insufficiencies of onboard vehicle systems, it provides limited guidance on complex socio-technical challenges in ADS deployment, such as inadequate responses to interactive road user behaviors. This paper proposes STAMP-WA, an extension of the STAMP/STPA framework that integrates principles of the holistic Safe System approach promoted by the United Nations. STAMP-WA broadens ADS safety analysis by focusing on achieving harmony within the traffic environment through safe interaction with road users and infrastructure, and on addressing stakeholder feedback loops to ensure continuous learning and adaptation in design, operation, and policy development. By integrating these perspectives, STAMP-WA provides valuable input for evolving SOTIF and fostering dependable ADS with positive emergent properties at the transportation system level.

16:30 - 17:00 - Coffee Break (Exposition Hall - Level 1)

Cassiopée Room

17:00 - Th.4.A: AI Hardware 2

Session Chair : Andres Barrilado (NXP)

Embedding DNNs in resource-constrained systems is gaining increasing interest across various domains. However, deployment remains hindered by opaque, vendor-specific tools that limit transparency, flexibility, and reproducibility. In this paper, we introduce Aidge, an open-source framework for the design and deployment of DNNs in constrained environments. Aidge is built around four core principles: being community-driven and dependency-free, offering a user-friendly hierarchical graph intermediate representation, enabling full traceability, and ensuring two-way interoperability with major deployment tools. These principles guide a modular architecture that allows users to analyze, optimize, and deploy models with a high degree of control and portability. By reducing reliance on black-box toolchains and supporting transparent, verifiable workflows, Aidge offers a practical and robust foundation for embedded AI development, particularly in applications requiring high assurance, such as safety-critical systems.

As the industry's interest in machine learning has grown in recent years, some solutions have emerged to safely embed ML models in safety-critical systems, such as the C code generator ACETONE. However, this framework is limited to generating sequential code, which cannot make the most of multi-core architectures. In this paper, we initiate an extension of ACETONE for the generation of parallel code by formally defining our processor assignment problem and surveying the state of the art on existing solutions. In the final paper, we will introduce the completed extension, including the implementation of the scheduling heuristic, the creation of templates implementing synchronization mechanisms, and an evaluation of the worst-case execution time of the framework's layers.
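To give an idea of what such generated parallel code might look like, here is a minimal hand-written sketch of the pattern: static slicing of a layer across worker threads with a barrier between layers. All names and the partitioning scheme are illustrative, not ACETONE's actual templates:

```c
#include <pthread.h>

#define N_CORES 4
#define N 1024

static pthread_barrier_t layer_done;
static float input0[N], output0[N];

/* Each worker computes a static slice of the layer's output. */
static void layer0_slice(int core)
{
    int chunk = N / N_CORES;
    for (int i = core * chunk; i < (core + 1) * chunk; i++)
        output0[i] = input0[i] > 0.0f ? input0[i] : 0.0f;  /* ReLU slice */
}

static void *worker(void *arg)
{
    int core = (int)(long)arg;
    layer0_slice(core);
    pthread_barrier_wait(&layer_done);  /* layer 0 fully written */
    /* layer1_slice(core); ... subsequent layers repeat the pattern */
    return NULL;
}

void run_network(void)
{
    pthread_t th[N_CORES];
    pthread_barrier_init(&layer_done, NULL, N_CORES);
    for (long c = 0; c < N_CORES; c++)
        pthread_create(&th[c], NULL, worker, (void *)c);
    for (int c = 0; c < N_CORES; c++)
        pthread_join(th[c], NULL);
    pthread_barrier_destroy(&layer_done);
}
```

A WCET evaluation of such code would then have to account for the barrier wait times in addition to the per-slice computation.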

18:00 - PhD Award presentation

18:30 - Best Paper Awards - Sponsors

Guillaumet 1

17:00 - Th.4.B: Real Time

Session Chair : Denis Claraz (SCHAEFFLER)

Cache coherence implementing memory consistency is a transversal feature of most MPSoCs. However, cache coherence validation is challenging due to the limited observability and controllability available for at-speed testing. This is particularly problematic for safety- and security-critical domains, where the risk of failure must be proven to be residual. This paper presents a solution to simplify V&V of cache coherence implementations at the hardware level by integrating a programmable traffic generator, hence overcoming the limitations of regular software-based tests for at-speed testing. In particular, we build on SafeTI, an open-source programmable and cycle-accurate traffic generator, to assess data consistency between the first two cache levels of a space-relevant platform based on Frontgrade Gaisler's IPs.

In the context of modern spacecraft software, achieving functional and temporal isolation has become a key requirement for building reliable and maintainable onboard systems. Containers offer a lightweight solution for isolating software components in Linux-based platforms, but current implementations fall short when it comes to providing strong temporal guarantees and supporting diverse scheduling needs. To address these shortcomings, this paper argues that containerized flight software requires both hierarchical scheduling and multi-policy support to accommodate the heterogeneity of onboard tasks. The authors propose to extend the Compositional Scheduling Framework (CSF), leveraging the principles of the Layered Preemptive Priority (LPP) model to capture intra-container multi-policy scheduling. On the implementation side, an outline is provided to enforce temporal budgets across containers in a policy-agnostic way based on Linux’s emerging deadline server mechanism. While no implementation or analytical framework is presented at this stage, the aim is to lay the conceptual foundation for a future kernel-level scheduling architecture tailored to real-time container orchestration in space systems.
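For readers unfamiliar with the kernel facility involved, Linux's deadline servers build on SCHED_DEADLINE reservations. The following sketch gives a single thread a 2 ms budget every 10 ms; the container-level budget enforcement proposed in the paper is its open work and is not shown:

```c
#define _GNU_SOURCE
#include <stdint.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/sched.h>        /* SCHED_DEADLINE */

/* glibc does not export struct sched_attr, so declare it by hand,
 * following the layout documented in sched_setattr(2). */
struct sched_attr {
    uint32_t size;
    uint32_t sched_policy;
    uint64_t sched_flags;
    int32_t  sched_nice;
    uint32_t sched_priority;
    uint64_t sched_runtime;
    uint64_t sched_deadline;
    uint64_t sched_period;
};

/* Reserve 2 ms of CPU time every 10 ms for the calling thread.
 * Requires CAP_SYS_NICE (typically root). */
int set_deadline_budget(void)
{
    struct sched_attr attr = {
        .size           = sizeof(attr),
        .sched_policy   = SCHED_DEADLINE,
        .sched_runtime  =  2 * 1000 * 1000,   /* ns */
        .sched_deadline = 10 * 1000 * 1000,
        .sched_period   = 10 * 1000 * 1000,
    };
    return (int)syscall(SYS_sched_setattr, 0 /* self */, &attr, 0);
}
```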

Guillaumet 2

17:00 - Th.4.C: Fault Handling

Session Chair : Barbara Gallina (MDU)

As automotive Electrical/Electronic (E/E) architectures shift from distributed to zonal architectures, communication between high-performance computing nodes (HPCs) and zonal computing units (ZCUs) becomes a key point of failure. Any disruption may affect multiple critical functions simultaneously, requiring robust and timely fault-handling strategies in accordance with safety standards. This paper proposes a communication recovery mechanism based on AUTOSAR that takes advantage of the simultaneous presence of CAN and Ethernet links between HPC and ZCU to establish a redundant communication path. In the event of a CAN failure, preconfigured routing rules enable dynamic switchover to Ethernet, preserving communication continuity. The approach is validated on a real HPC–ZCU platform, with instrumentation used to measure fault reaction times and assess communication continuity. This work lays the foundation for broader investigations, including the integration of Adaptive AUTOSAR, generalized fault scenarios, and scalable reconfiguration strategies for future centralized E/E architectures.

Autonomous fault and anomaly detection is critical for ensuring the safety and success of space missions, addressing the limitations of ground-based analysis due to bandwidth constraints and operational delays. The conventional approach in space operations involves using Out-of-Limits (OOL) alarms for anomaly detection, which may prove insufficient in identifying and responding to complex anomalies or unforeseen novelties within the range of nominal values. In our previous work (Voldrich, Luschykov, Harwot, & Manilla, 2023), we proposed a machine learning approach for on-board telemetry anomaly detection that addresses these limitations. We demonstrated a proof-of-concept integration of a TensorFlow model onto a radiation-tolerant LEON 3 processor and benchmarked various unsupervised and semi-supervised techniques with respect to their performance, memory footprint, and runtime. Our experiments indicated that a two-phase model comprising a Siamese convolutional encoder for dimensionality reduction, followed by an outlier detector such as K-Nearest Neighbors (KNN) or Isolation Forest, is the most promising approach for our use case.

We reimplemented these outlier detectors and developed a custom tool to export trained models from the PyOD library into a C++ environment for integration with our inference code. We developed separate models for each subsystem (batteries, reaction wheels, and solar panels), as this approach empirically yielded better performance. While a unified model theoretically offers advantages in capturing interconnected relationships and identifying related anomalies, it did not outperform the specialized models in practice.

Our recent advancements focus on bridging the gap between a proof-of-concept solution and a fully production-ready system. Mainly, we focused on developing a semi-automatic pipeline to simplify the end-to-end development and integration of these models, enabling rapid iteration once real mission data becomes available, rather than relying solely on simulators. This pipeline automates model training, big-endian conversion, and import into the TensorFlow Lite for Microcontrollers (TFLM) framework for inference. Furthermore, we investigated the application of Explainable AI (XAI) and Uncertainty Quantification techniques to enhance model transparency and allow alarm-rate calibration even in online settings. However, we found that the benefits of both XAI and Uncertainty Quantification were somewhat limited by our two-stage model architecture and the inherent nature of our task, which relies on sequential data.

This is only one of the drawbacks of our two-stage approach. Another drawback was that the neural network in the first stage was trained without an explicit objective to differentiate between features of nominal and anomalous samples, which complicated the anomaly detection task in the subsequent stage. To address this, we implemented and evaluated two new models: Deep Support Vector Data Description (Deep SVDD) and Autoencoder with One-Class Support Vector Machine (AE-1SVM). These models integrate a neural network for dimensionality reduction with a one-class SVM for outlier detection, but critically, they are trained with a unified anomaly detection objective.
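For reference, the unified criterion referred to for Deep SVDD is the one-class Deep SVDD objective of Ruff et al. (2018), where $\phi(\cdot;\mathcal{W})$ is the network and $c$ is a fixed center in latent space:

\[
\min_{\mathcal{W}} \; \frac{1}{n}\sum_{i=1}^{n}\bigl\lVert \phi(x_i;\mathcal{W}) - c \bigr\rVert^{2}
\;+\; \frac{\lambda}{2}\sum_{\ell=1}^{L}\bigl\lVert W^{\ell} \bigr\rVert_F^{2},
\]

with the anomaly score of a sample at inference time given by $s(x) = \lVert \phi(x;\mathcal{W}) - c \rVert^{2}$.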
This single-stage approach offers several advantages: the models are unsupervised, eliminating the need for anomalous training data (though a contamination parameter is still required), and their objective functions have connected computational graphs, enabling gradient computation with respect to the input data, which can be leveraged for tasks like adversarial sample generation or estimating feature importance. Both Deep SVDD, which maps data into a compact sphere in latent space, and AE-1SVM, which applies a one-class SVM in the autoencoder's latent space, were adapted using convolutional and recurrent layers to handle our high-dimensional, time-dependent telemetry data effectively. This also creates opportunities to apply more advanced XAI techniques in the next phase of the project, which is expected to follow.

The first data from HERA also became available during this period. Transitioning from synthetic and historical MEX data involved training models with actual telemetry from the HERA mission, collected between October 2024 and February 2025. This move presented new challenges, including the absence of clear subsystem delineations, missing solar array data, and ambiguous variables within the chemical propulsion system. Our initial focus for the HERA data was the reaction wheel subsystem, leveraging available bearing temperature, current, rotation speed, and torque measurements recorded at 16-second intervals. Rigorous data pre-processing was essential: temperature readings were excluded due to external influences, and known error-induced spikes and insufficient data segments related to rotation speed changes were carefully removed.

For this HERA-specific application, we trained a single Deep SVDD model incorporating all twelve reaction wheel variables, using 36-measurement subsequences sampled at every 40th interval. While the model achieved promising F1, precision, and recall scores (0.822, 0.853, and 0.793, respectively), it initially misclassified known nominal spikes caused by commands as anomalies. To mitigate this, a unique approach was employed, iteratively retraining the model while prioritizing a percentage of the samples with the highest anomaly scores from previous runs. This method significantly reduced false positives for commanded events, although it introduced considerations regarding potential data leakage due to the sequential nature of the data and shared samples across the training, validation, and test sets, suggesting further improvements like oversampling or weighted sampling of spike-containing subsequences.

Finally, we have significantly enhanced the robustness of our on-board deployment by developing a comprehensive patch for the TensorFlow Lite Micro (TFLM) framework. This patch, combined with a custom tool we created to manipulate the model's flatbuffer file by swapping the underlying buffers to change endianness, enables big-endian support. The custom tool specifically addresses the endianness of buffers and subgraph metadata within the Flatbuffer model, performing byte-level swaps on operator inputs/outputs and tensor shapes. The patch leverages TFLM's memory management classes to ensure compatibility with the latest TFLM versions while maintaining strict static memory allocation, which is of great importance as the other core of the processor is dedicated to running the flight software. The patch uses the same memory pool as TFLM, making it an efficient solution that adds very little computational overhead.
Additionally, the byte swapping is performed offline, so this step does not consume any resources on the target system, which is critical given its limited capacity. This approach specifically targets the LEON processor, a big-endian system, by disabling the use of reinterpret_cast for Flatbuffer vectors and instead allocating and copying data element by element, thereby resolving potential endianness and memory-alignment issues. This novel integration of an ML-driven anomaly detection framework on big-endian, resource-constrained space processors represents a crucial step towards more resilient and autonomous spacecraft operations, ultimately enabling advanced on-board FDIR capabilities and reducing reliance on ground intervention.

19:00 - GALA DINNER (Salle Caravelle 2)

Friday, February 6

Cassiopée Room

08:00 - 09:00 - Welcome Coffee (Exposition Hall - Level 1)

09:00 - Keynote Session

Cassiopée Room

09:00

Francisco J. Cazorla

Director of the High-Performance Embedded Systems (HPES) Laboratory, Barcelona Supercomputing Center, Spain

Critical Embedded Systems (CES) are embracing autonomy across automotive, space, avionics, and robotics, fueling the demand for performance-hungry AI software. Multi-Processor Systems-on-Chip (MPSoCs) offer the needed computing performance, yet their complexity, alongside AI software's intricacies, presents significant hurdles for functional safety, particularly in software timing Verification and Validation (V&V). The core challenge stems from unpredictable resource contention among applications sharing MPSoC hardware, potentially leading to severe performance impacts. In this talk, I explore two approaches to mitigating timing risks on complex MPSoCs. First, I will examine software-only techniques for addressing key timing V&V challenges: generating stressful scenarios for Worst-Case Execution Time (WCET) estimation, enabling WCET analysis in multi-provider environments with IP restrictions, and monitoring contention among tasks. Second, in terms of hardware solutions, I will cover strategies to extend high-performance MPSoCs with features that enable their use in safety-relevant scenarios without impacting performance, including several modules for increased observability and quota control, and modules for flexible performance testing.

Cassiopée Room

10:00 - Fr.1.A: AI Use Case

Session Chair : Alexandre Albore (ONERA)

Estimating the real-time value of state variables is crucial in various industrial embedded applications, such as automotive and aeronautical systems. The most straightforward way to obtain such information is through physical sensors. However, their implementation can be both costly and complex, prompting the use of models as substitutes, often referred to as virtual sensors. In this context, artificial intelligence (AI) has recently demonstrated its suitability for designing these advanced solutions. In this work, an embedded machine learning (ML) algorithm is explored as a virtual sensor to predict the actual temperature of an automotive power electronics component, thereby ensuring effective thermal management. This methodology is proposed as an alternative to conventional modeling approaches based on simplified physical laws, which are often used in the automotive industry. The study outlines each phase, from data preprocessing to prototype testing, including feature engineering, model design, and the embedding process. Several ML models, including linear regression and neural networks, are evaluated and demonstrated to be excellent and relevant alternatives to traditional modeling in terms of both offline and online performance.

Transforming Artificial Neural Networks (ANNs) into efficient executables on resource-constrained embedded platforms is an essential step for modern AI applications. This process relies on deployment toolchains, whose growing number and feature sets pose a significant challenge for developers. Differences among these toolchains can have critical impacts on final system performance and development cost. To address this, our work introduces a disciplined approach comprising key evaluation criteria for systematically assessing and comparing neural network deployment frameworks. We illustrate our method through an extensive comparative analysis of leading toolchains targeting diverse hardware architectures, including FPGAs, GPUs, and CPUs/MCUs. The insights and practical guidelines derived from this study are intended to facilitate navigation of the complex toolchain landscape and to help make rational design and implementation decisions.

Guillaumet 1

10:00 - Fr.1.B: Logical Execution Time

Session Chair : David Lesens (ARIANE Group)

The logical execution time (LET) paradigm has received substantial emphasis from the real-time community due to its many benefits, specifically its time-determinism guarantees. Consequently, many programming languages have chosen to implement this paradigm, which raises the question of which language should be considered. Our paper tackles this question by exploring and comparing LET-based programming languages through a dual-lens approach: (1) global framing, consisting of chronological and contextual comparisons; and (2) the extent of their pattern-support capabilities. Our aim is to offer guidance to developers in selecting the most suitable programming language based on system constraints.
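Independently of any particular language surveyed, the LET contract itself is small enough to sketch in plain C: inputs are frozen at the logical release instant and outputs become visible only at the logical end of the interval, so observable I/O timing does not depend on where the computation actually executes. Names below are illustrative:

```c
#include <stdint.h>

static volatile int32_t sensor_raw;    /* written by a driver or ISR   */
static int32_t input_snapshot;         /* input frozen for this period */
static int32_t output_pending;         /* computed, not yet published  */
static volatile int32_t actuator_cmd;  /* consumers read only this     */

void let_release(void)   { input_snapshot = sensor_raw; }          /* at t = k*P   */
void let_execute(void)   { output_pending = 2 * input_snapshot; }  /* anywhere inside the interval */
void let_terminate(void) { actuator_cmd = output_pending; }        /* at t = k*P + LET */
```

The languages compared in the paper differ mainly in how much of this pattern (release and terminate instants, communication, modes) they let the programmer state declaratively.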

In this paper, we propose a method to ensure the end-to-end timing constraints of multi-rate cause-effect chains applying the LET model in systems with mode changes. Upon a mode change request, our method migrates specific task instances to allow the reconfiguration of communication intervals and to ensure compliance with the latency requirements of the new execution mode.

Guillaumet 2

10:00 - Fr.1.C: Middleware and Language

Session Chair : José Ruiz (AdaCore)

The evolution of embedded automotive software is driven by increasing demands in driver assistance, perception, and data processing. To meet these challenges, Ampère is transitioning toward a Software-Defined Vehicle (SDV) architecture. This shift aims to reduce the number of Electronic Control Units, cutting system complexity, cabling, and weight—thus reducing cost and energy consumption, and enabling more powerful centralized computing for advanced tasks such as computer vision and AI. This transition challenges the traditional toolchain based on Simulink/Stateflow, which generates standard-compliant C code under a periodic execution model. While well established, this paradigm leads to inefficiencies in SDV architectures, including redundant processing and communication overloads, which currently require manual optimization at the C level. In addition, verifying safety properties is costly, and the model lacks support for modern, event-driven or asynchronous execution semantics. To align with SDV requirements, Ampère is adopting Rust, a language offering strong safety and cybersecurity guarantees at compile time. However, Rust's low-level control and expressiveness introduce a steep learning curve for developers coming from model-based backgrounds. To bridge this gap, we propose GRust, a declarative Domain-Specific Language (DSL) for automotive system modeling. GRust combines the Synchronous paradigm—familiar to control engineers—with constructs from Functional Reactive Programming (FRP). It enables explicit modeling of temporal and data dependencies as part of system behavior, rather than relying on implicit execution semantics. GRust compiles to safe and asynchronous Rust code, well-suited to modern SDV architectures. We evaluated GRust by reimplementing existing Rust programs from Ampère's SDV codebase and integrating them into their simulation framework. Early results show gains in development speed and code clarity while preserving performance, supporting GRust as a practical bridge between high-level modeling and Rust-based SDV development.

The evolution of Software-Defined Vehicles (SDV) has led to increased performance requirements for communication middleware, particularly to support continuous feature updates across both current and future platforms. To address these requirements, the architecture of communication middleware must be tightly integrated with the underlying operating system and adhere to well-defined design patterns to ensure reliability, scalability, and real-time performance. In traditional vehicle architectures, the electrical/electronic (E/E) architecture is typically fixed for each vehicle generation. However, in the context of Software-Defined Vehicles, many software capabilities are increasingly decoupled from the underlying E/E architecture. As a result, network efficiency becomes critical to enabling the extension and upgrade of software functionalities—particularly through over-the-air updates—on existing ECU platforms. The Scalable Service-Oriented Middleware over IP (SOME/IP) protocol often serves as the backbone for vehicle Ethernet communication. We evaluated a version of the SOME/IP stack directly integrated into the network stack of the QNX operating system, and compared it to the traditional approach, where a daemon runs as a standalone process. For meaningful results, we simulated a real-world use case with realistic vehicle communication. Our measurements revealed a substantial 50% reduction in CPU usage and very low latency. This direct integration eliminates unnecessary context switches, memory copies, and expensive IPC communication between a daemon and the network stack, significantly enhancing performance. The tight integration of the SOME/IP functionality into the QNX network stack represents an effective strategy to improve middleware performance. When combined with a multi-threaded architecture, communication middleware can manage higher volumes of network traffic while maintaining a lower impact on CPU utilization.

11:00 - 11:30 - Coffee Break (Exposition Hall - Level 1)

Cassiopée Room

11:30

Delphine Dufourd-Moretti

Chief Engineer of Armament (IGA), Director of System of Systems, Direction Générale de L’Armement (DGA), France

Given the rapid evolution of threats on the battlefield, our defense systems need to adapt quickly, taking advantage of new technologies such as Artificial Intelligence (AI).

As a result, defense system engineers face many challenges including agility within engineering processes, safe integration of AI modules and semi-autonomous coordination between individual systems in order to gain mass against the enemy.

We can thus observe a paradigm shift towards more network-centric architectures with the emergence of new “systems of systems” (SoS) in every domain (land, air, or sea), which exhibit new properties with respect to individual complex systems: emerging collective behaviours and extended ranges, multiple concurrent lifecycles, and separate program management frameworks, while preserving a form of operational independence between SoS elements.

In this context, we will analyze how embedded systems need to evolve in order to gain modularity, to better store, process and exchange large amounts of data and to ensure safe collaborative actions between platforms, based on AI.

12:30 - 14:00 - Lunch Break (Exposition Hall - Level 1)

Cassiopée Room

14:00

PANEL - Sustainability in Real-Time Embedded Systems

Moderator :

Pierre Louis Vernhes (The Shift Project)

Participants :

Nathalie Canis (AUMOVIO), Lionel Cordesse (IRT Saint Exupéry), Sophie Quinton (INRIA), Bénédicte Robin (CEA), Florian Simatos (ISAE)


Cassiopée Room

15:00 - Fr.4.A: AI Certification 2

Session Chair : Amina Mekki-Mokhtar (ANSYS)

This paper investigates the application of artificial intelligence (AI) to create virtual sensors as cost-effective alternatives to physical sensors, specifically for temperature estimation in electric machines (e-machines), aiming to reduce system complexity and cost. It addresses the critical need for robust AI systems in high-stakes scenarios, ensuring reliability under noisy data conditions, which aligns with emerging regulatory frameworks like the EU AI Act. The study positions itself within the state-of-the-art by reviewing and comparing advanced machine learning (ML) algorithms—namely Uncertainty Quantification (UQ), Scientific Machine Learning (SML), Formal Guarantees, and Symbolic Regression—for their robustness and predictive guarantees over noisy inputs, extending beyond traditional approaches that often focus on adversarial attacks. Unlike much of the existing literature, which emphasizes AI implementation for embedded systems, this work prioritizes prediction robustness and formal guarantees, contributing to the growing field of trustworthy AI, including concepts like explainability and transparency. Using a dataset from twelve temperature sensors in an e-machine test rig, collected over 360 hours, the study trains neural network models with a consistent multi-layer perceptron architecture, optimized using Mean Squared Error (MSE) and the Adam optimizer. Robustness is assessed by introducing random noise to test inputs and evaluating metrics such as mean absolute deviation, maximum absolute deviation, and MSE. The paper provides a novel comparative analysis of these metrics alongside guarantee levels for each ML approach, offering insights into their suitability for reliable virtual sensor applications in automotive systems. This positions the work as a significant contribution to advancing robust AI-driven virtual sensing, particularly for real-time, safety-critical applications in the automotive industry.

The aviation industry operates within a highly critical and regulated context, where safety and reliability are paramount. As Natural Language Processing (NLP) systems become increasingly integrated into such domains, ensuring their trustworthiness and transparency is essential. This paper addresses the importance of explainability (XAI) in critical sectors like aviation by studying NOTAMs (Notice to Airmen), a core component of aviation communication. We provide a comprehensive overview of XAI methods applied to NLP classification tasks, proposing a categorization framework tailored to practical needs in critical applications. We also propose a new method to create aggregated explanations from local attributions. Using real-world examples, we demonstrate how XAI can uncover biases in models and datasets, leading to actionable insights for improvement. This work highlights the role of XAI in building safer and more robust NLP systems for critical sectors.

This paper addresses key challenges in the development of autonomous landing systems, focusing on dataset limitations for the supervised training of Machine Learning (ML) models. Our main contributions include: (1) enhancing dataset diversity, advocating for the inclusion of new sources such as BingMap aerial images and Flight Simulator to enrich an existing synthetic dataset called LARD; (2) refining the Operational Design Domain (ODD), addressing issues like unrealistic landing scenarios and expanding coverage to multi-runway airports; and (3) benchmarking ML models for autonomous landing systems, introducing a framework for evaluating the object detection subtask in a complex multi-object setting, and providing associated open-source models as a baseline for AI model performance.

Guillaumet 1

15:00 - Fr.4.B: Time-Sensitive Networking

Session Chair : Damien Chabrol (ASTERIOS Technologies)

To meet the new needs of deterministic networks, the IEEE has proposed an extension to Ethernet called Time-Sensitive Networking (TSN). Among other things, this extension makes it possible to achieve low-latency performance with availability equivalent to or better than current solutions, and to unify high-speed networks and real-time field buses, while being based on open, multi-industry standards. Nevertheless, this extension introduces a certain complexity (e.g., interactions between mechanisms, the search for the optimal configuration, …) that needs to be studied. To achieve this goal with a low investment in time and hardware, simulation is the ideal tool. But for the results to be meaningful, we need to ensure that the simulation behavior is close to that of the hardware. In this paper, we therefore characterize the behavior of a TSN switch and use the characterization results to calibrate the behavior of a simulated switch.

Per-Stream Filtering and Policing (PSFP), standardized in IEEE 802.1Qci, is a mechanism for enhancing fault containment in Time-Sensitive Networking (TSN) domains. This paper examines two limitations of PSFP that challenge the prevailing “zero-fault” perception. First, the Flow Meter inside PSFP measures traffic in Service Data Unit (SDU) bytes—i.e., from the MAC destination address through the Frame Check Sequence—while common Ethernet shapers such as the Credit-Based Shaper (CBS) regulate on the full “on-wire” packet length, which includes the SDU plus the 8-byte preamble and 12-byte inter-frame gap. This results in a 20-byte per-frame discrepancy that can increase admissible rates: for minimum-size packets, a talker may exceed its contractual bandwidth by up to 30%, allowing queue build-up. Second, the CBS positive-credit recovery phase permits the transmission of bursts whose size increases with the idleSlope setting. To avoid unintended drops, the Flow Meter's Committed Burst Size must be set proportionally higher, which can reduce the policing effect and widen the window in which faulty talkers may impact critical streams. Through deterministic analysis with RTaW-Pegase and hardware-in-the-loop experiments on an automotive TSN testbed, we quantify both effects and assess configuration-level mitigations. We then discuss possible standard-evolution options, including SDU-aware byte counting and lower-bound filters for frame sizes, that could address these gaps. The aim of this paper is to guide practitioners toward PSFP configurations that enable effective fault containment, and to make suggestions for PSFP improvements that could provide even stronger guarantees.
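The up-to-30% figure can be reproduced with simple arithmetic. A minimum-size Ethernet frame counts 64 SDU bytes (destination address through FCS) but occupies $64 + 8 + 12 = 84$ bytes of wire time once the preamble and inter-frame gap are added, so an SDU-based meter admits up to

\[
\frac{84}{64} = 1.3125
\]

times the contracted wire-level bandwidth for minimum-size packets, i.e., roughly 30% excess; the overhead fraction shrinks as frame size grows.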

16:00 - Fr.4.B: Cybersecurity

Session Chair : Damien Chabrol (ASTERIOS Technologies)

In today's interconnected world, the security of software systems is paramount. The increasing frequency and severity of cyberattacks underscore the need for a fundamental shift in software development. The “Secure by Design” paradigm emerges as a proactive strategy, advocating for integrating security considerations throughout the entire software development lifecycle, rather than as an afterthought. For the 2026 Embedded Real-Time Systems Conference (ERTS), AdaCore presents a research paper, “Secure by Design Principles – Fuzzing in a Memory-Safe Verification Environment”, which combines CHERI ISA extensions, a security-enhanced Ada runtime, and on-target fuzz testing as means of compliance with Secure by Design principles. This paper introduces an innovative approach that utilizes a combination of memory-safe hardware and fuzz testing to enhance security verification in embedded real-time systems.

Quantum computing poses a significant threat to widely used public-key cryptographic schemes such as RSA and ECDSA. Post-quantum cryptographic (PQC) algorithms like SPHINCS+, recently standardized in Federal Information Processing Standards (FIPS) 205, offer a quantum-resistant alternative but are resource-intensive and challenging to deploy on automotive embedded systems with strict real-time constraints. In this work, we investigate the feasibility of accelerating SPHINCS+ on the Infineon Traveo II embedded platforms, specifically the 1M (CYT2B7) variant, featuring Arm Cortex-M4F with Cortex-M0+ cryptographic cores and integrated SHA-2 accelerators. By offloading key hash operations and leveraging multicore execution across the application and Hardware Security Module (HSM) cores, we achieve significant performance gains. The obtained results show that this hybrid approach can bring SPHINCS+ execution times within limits acceptable for automotive-grade embedded systems.

Guillaumet 2

15:00 - Fr.4.C: Model-Driven Development

Session Chair : Marie De Roquemorel (AIRBUS D&S) 

We describe a general pattern for high-performance Level 2 ADAS features, which enables fast and cost-effective verification when expanding the feature in small increments. The pattern states that, for higher performance levels, the ADAS feature shall fully control the driving task it is responsible for, without expecting the driver to intervene to compensate for specific circumstances. While this is a minor step from the driver's perspective, it is essential from a safety argumentation standpoint. One major issue addressed is overtrust, as infrequent driver intervention can lead to unsafe assumptions at the fleet level. Another key advantage is enabling incremental generation of a safety case with limited additional effort for each expansion of the Operational Design Domain (ODD). The argumentation strategy remains consistent across iterations, requiring only evidence related to the ODD expansion. This extended abstract focuses on the core concepts and their impact beyond the state of practice, with detailed examples provided in the full paper.

The aerospace industry is faced with highly complex and critical operational scenarios, exemplified by aircraft ground handling, where efficiency and safety are paramount. Simulation has become crucial for managing this complexity, allowing for analysis and optimization. Critical Path Analysis (CPA) helps identify the sequence of tasks determining the overall duration. Even though operations are formalized, Model-Based Systems Engineering (MBSE) tools utilizing SysML Activity Diagrams (AD) lack sophisticated CPA and formal verification. To address these limitations, this paper proposes a model transformation approach for converting SysML ADs into Business Process Model and Notation (BPMN) diagrams. The contributions of this paper include defining mapping rules for transforming relevant SysML AD constructs to BPMN and demonstrating this transformation’s feasibility using an aircraft standing phase case study. The results show that this approach enables advanced analyses, such as stochastic CPA and formal verification, which are challenging to perform directly within the original SysML modeling environment.

The verification of safety-critical systems, though supported by various tools, remains a manually intensive process, often hindered by algorithmic limitations and inadequate support for embedding safety analysis artifacts directly within design models. Techniques like model checking require formal models, which are typically created separately from the design models. This redundant modeling effort is not only time-consuming, but also introduces the risk of semantic discrepancies between the design and safety models, necessitating significant validation. To bridge this gap, this paper proposes a modeling approach to enhance the industrial applicability of automated verification techniques. We introduce a methodology for integrating key safety artifacts—specifically, Failure Modes (FMs), Failure Conditions (FCs), and analysis results like Minimal Cut Sets (MCSs)—directly into Simulink design models. This is enabled by our Design Integrated Verification Environment (DIVE) plugin, which provides
