CADP, a toolbox distributed all over the world
The ship-like building of the INRIA Grenoble – Rhône-Alpes Centre in Montbonnot Saint-Martin, on the outskirts of Grenoble, houses the fifteen members of Convecs, the joint INRIA Grenoble – Rhône-Alpes and Grenoble Computer Science Laboratory project-team. The team includes four INRIA researchers, a professor from Grenoble Alpes University, PhD students, postdoctoral researchers, specialist engineers and interns, all carrying out research on modelling and verifying parallel and interconnected systems.
Their work led to the creation of the CADP (for "Construction and Analysis of Distributed Processes") toolbox. This modular platform, launched in 1986, contains some fifty tools that have already had a significant impact on research and teaching, leading to more than 200 publications and over 100 research prototypes. Licences have been purchased by several companies including Airbus, Google, Bull, STMicroelectronics and Tiempo Secure, as well as by industrial research centres such as CEA-Leti and Saint-Exupéry TRI. “We also work, for example, with car manufacturers, mechatronic components specialists (e.g., Crouzet) and key players in the telecom and cloud sectors (Orange and Nokia Bell Labs)”, says Wendelin Serwe, a researcher on the Convecs team.
Increasingly complex information systems
“Our research at Convecs focuses on modelling and verifying asynchronous parallel systems”, explains Radu Mateescu, head of the project-team initiated in 2012 in the wake of a previous project-team named Vasy. “An asynchronous parallel system relies on several agents (or processes) which mainly work independently, except when they need to synchronise or communicate via messages.”
These systems owe their complexity to the fact that such interactions are difficult to predict (a property referred to as “non-determinism”). This complexity increases as the systems evolve with technical progress. “The first computers worked with a single processor, which executed the programme’s instructions one line at a time”, explains researcher Hubert Garavel. “However, the quest for performance led to the use of several processors and of programmes running at the same time, which raised problems of synchronisation and communication.” The various tasks must be organised correctly to avoid deadlocks and data corruption.
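To make this concrete, here is a minimal sketch in Go (not one of CADP’s own languages): two processes run independently and synchronise only when a message is exchanged over a channel. The `double` function and its client/server roles are purely illustrative, not taken from CADP.

```go
package main

import "fmt"

// double runs a hypothetical client/server pair of asynchronous
// processes. A server goroutine doubles each request; the client
// collects the replies. The two processes work independently and
// synchronise only when a message is exchanged on a channel.
func double(inputs []int) []int {
	requests := make(chan int)
	replies := make(chan int)

	// Server process: answer each request, then close the reply channel.
	go func() {
		for r := range requests {
			replies <- r * 2
		}
		close(replies)
	}()

	// Client process: send each request, then wait for its reply.
	// Exchanging messages in a fixed send/receive order avoids the
	// classic deadlock where both sides wait to receive first.
	out := make([]int, 0, len(inputs))
	for _, v := range inputs {
		requests <- v
		out = append(out, <-replies)
	}
	close(requests)
	return out
}

func main() {
	fmt.Println(double([]int{1, 2, 3})) // [2 4 6]
}
```

Even in this tiny example, reversing the order of a single send and receive would make both processes wait for each other forever, which is exactly the kind of blockage such systems must be organised to avoid.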
The semantics of parallelism, a rich but poorly understood theory
“Parallelism is a universal concept”, Hubert Garavel explains. “We find it in many human activities, particularly in the management of organisations and work groups, where consultation and planning mechanisms must be put in place to coordinate everyone’s tasks and avoid deadlocks or inconsistent situations. It can also be found in telephones and laptop computers, which now contain several cores and co-processors in order to simultaneously execute very diverse actions: user tasks, operating system activities, display, network communications, etc. Parallelism is also found in connected objects and smart buildings.” With supercomputers, scientists speak of “massive parallelism”, because these machines work with tens or hundreds of thousands of processors.
Methods within the reach of industrial players
Concurrency theory proposes general models to describe interactions, predict their behaviour and verify their correctness. It combines the research results of the great pioneers of computer science, several of whom have received the Turing Award. It is on this rich but daunting body of theory that the work of the Convecs project-team is based. “We endeavour to place these methods within the reach of industrial players by providing them with languages and tools that can be learnt and mastered without advanced mathematical expertise”, says Radu Mateescu. “The idea is to describe the workings of asynchronous parallel systems in a form intelligible to humans, while maintaining sufficient rigour for it to be correctly understood by machines.”
Verification can then begin, with the help of compositional verification tools, which allow the most complex processes to be broken down into simpler fragments for analysis. One of the Convecs team’s major aims is “to test and validate the majority of behaviour scenarios in advance, before the systems are deployed in the field”, concludes Convecs team researcher Frédéric Lang.
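The core idea behind such verification can be illustrated with a toy state-space exploration, sketched below in Go. This is a hypothetical illustration, not CADP’s actual algorithms or data structures: each process is modelled as a small transition system, the two are composed in parallel with synchronisation on shared actions, and every reachable joint state is visited to look for deadlocks.

```go
package main

import "fmt"

// Each process is modelled as a tiny labelled transition system:
// a map from a state to its outgoing (action, next state) pairs.
type trans map[string][]struct{ act, next string }

// findDeadlocks explores every reachable state of the parallel
// composition of p and q (started in p0 and q0), where the two
// processes must synchronise on the actions listed in sync.
// In this simplified model, any joint state with no outgoing
// transition counts as a deadlock.
func findDeadlocks(p, q trans, p0, q0 string, sync map[string]bool) []string {
	type pair struct{ ps, qs string }
	seen := map[pair]bool{}
	stack := []pair{{p0, q0}}
	var deadlocks []string
	for len(stack) > 0 {
		s := stack[len(stack)-1]
		stack = stack[:len(stack)-1]
		if seen[s] {
			continue
		}
		seen[s] = true
		var succs []pair
		for _, t := range p[s.ps] { // p moves alone
			if !sync[t.act] {
				succs = append(succs, pair{t.next, s.qs})
			}
		}
		for _, t := range q[s.qs] { // q moves alone
			if !sync[t.act] {
				succs = append(succs, pair{s.ps, t.next})
			}
		}
		for _, tp := range p[s.ps] { // both move together
			for _, tq := range q[s.qs] {
				if sync[tp.act] && tp.act == tq.act {
					succs = append(succs, pair{tp.next, tq.next})
				}
			}
		}
		if len(succs) == 0 {
			deadlocks = append(deadlocks, s.ps+"/"+s.qs)
		}
		stack = append(stack, succs...)
	}
	return deadlocks
}

func main() {
	// Each component runs fine in isolation, but they try to
	// synchronise in opposite orders: a classic deadlock.
	p := trans{"s0": {{"a", "s1"}}, "s1": {{"b", "s2"}}}
	q := trans{"t0": {{"b", "t1"}}, "t1": {{"a", "t2"}}}
	sync := map[string]bool{"a": true, "b": true}
	fmt.Println(findDeadlocks(p, q, "s0", "t0", sync)) // [s0/t0]
}
```

Here the two components insist on synchronising in opposite orders (“a” then “b” versus “b” then “a”), so the exploration reports a deadlock in the very first joint state: exactly the kind of unpredictable interaction that exhaustive analysis can catch before a system is deployed.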