
Project-team seminar

A compilation-like approach to real-time systems implementation


Our work is at the frontier between real-time scheduling and compilation. Our objective is to build parallel software that respects hard real-time requirements.

  • Date: 4/12/2017
  • Place: Inria Paris - 2 rue Simone Iff (or: 41 rue du Charolais) - Lions room - building C
  • Guest(s): Keryan Didier

Historically, the construction of real-time systems relied on very abstract models - known as task models - of both the software and the execution platform. Building a task model involves high abstraction overheads and is seldom, if ever, done soundly. For instance, it is common practice to add a large, fixed overhead to all task execution durations to account for all interference from the OS and other tasks. But once the task model is built, its relatively small size enables the use of "exact" constraint-solving techniques.
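To make this concrete, here is a minimal sketch of the classic workflow described above: a small periodic task set whose measured WCETs are padded with a fixed overhead, then checked exactly with standard fixed-priority response-time analysis (Joseph and Pandya's fixed-point iteration). The task set, the overhead value, and the numbers are hypothetical, purely for illustration.

```python
import math

# Hypothetical periodic task set: (period T, measured WCET C),
# sorted by rate-monotonic priority (shortest period first).
tasks = [(10, 2), (20, 4), (50, 8)]

# The common (unsound but pragmatic) abstraction step: pad every
# WCET with a fixed overhead covering OS and interference costs.
OVERHEAD = 1
padded = [(T, C + OVERHEAD) for (T, C) in tasks]

def response_time(i, task_set):
    """Exact worst-case response time of task i under fixed-priority
    preemptive scheduling; tasks 0..i-1 have higher priority."""
    T_i, C_i = task_set[i]
    R = C_i
    while True:
        # Interference: each higher-priority task j preempts
        # ceil(R / T_j) times within the response window R.
        R_next = C_i + sum(math.ceil(R / T_j) * C_j
                           for T_j, C_j in task_set[:i])
        if R_next == R:
            return R          # fixed point reached: schedulable
        if R_next > T_i:
            return None       # deadline (= period) missed
        R = R_next
```

On this padded set the analysis yields response times 3, 8, and 20, all within the tasks' periods, so the set is deemed schedulable; the soundness of the verdict, however, rests entirely on the fixed overhead actually covering the real interference.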
But this solution does not scale to the size and complexity of next-generation embedded software and hardware. For instance, abstraction costs become prohibitive when parallelizing performance-critical software (e.g. an FFT) on a many-core, whereas applying exact methods would require further model simplifications.
Based on previous work in the AOSTE team, my objective was to apply a compilation-like approach to this implementation problem. Instead of constraint-solving techniques (e.g. SMT), I rely on low-complexity allocation and scheduling heuristics based on list scheduling. The low complexity of the scheduling algorithms allows the use of very precise models of the platform and the executing software, detailing aspects such as synchronization, memory coherency, and timing. For instance, we borrow from state-of-the-art timing analysis to allow fine-grained accounting of time on shared-memory platforms. Synthesis covers all aspects of code generation: allocation, scheduling, the construction of thread code including synchronization and memory coherency primitive calls, and the construction of linker scripts.
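The list-scheduling heuristic mentioned above can be sketched in a few lines: tasks form a dependency DAG with per-task WCETs, get a priority (here, the bottom level: the longest WCET path to a sink), and are greedily placed on the earliest-available core. The task graph, core count, and priority function below are hypothetical illustrations, not the actual tool's input format.

```python
# Hypothetical task DAG: name -> (WCET, list of predecessors)
TASKS = {
    "A": (4, []),
    "B": (3, ["A"]),
    "C": (2, ["A"]),
    "D": (5, ["B", "C"]),
}

def bottom_levels(tasks):
    """Priority of each task: longest WCET path from it to a sink."""
    succs = {t: [] for t in tasks}
    for t, (_, preds) in tasks.items():
        for p in preds:
            succs[p].append(t)
    memo = {}
    def bl(t):
        if t not in memo:
            memo[t] = tasks[t][0] + max((bl(s) for s in succs[t]), default=0)
        return memo[t]
    return {t: bl(t) for t in tasks}

def list_schedule(tasks, n_cores):
    """Greedy list scheduling: repeatedly pick the ready task with the
    highest priority and place it on the earliest-free core."""
    prio = bottom_levels(tasks)
    core_free = [0] * n_cores     # time at which each core becomes idle
    finish = {}                   # task -> finish time
    schedule = {}                 # task -> (core, start time)
    done = set()
    while len(done) < len(tasks):
        ready = [t for t in tasks if t not in done
                 and all(p in done for p in tasks[t][1])]
        t = max(ready, key=lambda x: prio[x])
        core = min(range(n_cores), key=lambda c: core_free[c])
        start = max([core_free[core]] + [finish[p] for p in tasks[t][1]])
        finish[t] = start + tasks[t][0]
        core_free[core] = finish[t]
        schedule[t] = (core, start)
        done.add(t)
    return schedule, max(finish.values())
```

Unlike an SMT query, each placement decision is made once and never revisited, which keeps the complexity low enough to afford a much more detailed platform model; the real tool additionally accounts for synchronization, coherency traffic, and memory interference when computing start and finish times.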
But the main result and originality of my work is that we no longer rely on some external, undocumented architecture abstraction step (e.g. adding 50% to all task durations). As in a compiler, our input includes only the source code and the real-time requirements. In particular, the durations of all pieces of code in the system (tasks, synchronizations, coherency code) are determined automatically, along with their interferences.
This is only possible through:
  • strong hypotheses on the capabilities, API, and ABI of the execution platform, and on the form of the input specifications;
  • precise choices on the form of the generated C code and the way it is allocated to memory, on the way static analysis is performed, and on the way mapping is performed.
The resulting compiler carefully integrates multiple tools: a dataflow compiler, a mapping and code generation tool, a C compiler and linker, and a static analysis tool.

Keywords: Seminar Gallium Research
