To understand the digital infrastructures of tomorrow!
Arnaud Legrand has been head of the Polaris team at the Inria centre in Grenoble for the last few months. Here he reviews the activity of this young team, which specialises in large-scale computing infrastructures - a fast-growing field.
Arnaud Legrand: At Polaris, we specialise in modelling large-scale computing infrastructures: supercomputers and cloud platforms, for example, but also 3G/4G/5G networks and smart grids.
All of these infrastructures share common characteristics: they are immense (hundreds of thousands to billions of entities), heterogeneous, and quite unpredictable. They are also shaped by interactions with human users, each with their own behaviour: users adapt to the system and cooperate (or not) with one another, while the infrastructure itself tries to learn and anticipate these behaviours in order to respond to them better.
These platforms are therefore extremely complex, and modelling them successfully is a real challenge! To do this, we draw inspiration from techniques traditionally used in economics (game theory) and statistical physics (Markov processes, mean-field approximations), and we are developing measurement, analysis, simulation and optimisation tools specifically adapted to the context of these large infrastructures.
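To give a flavour of the mean-field idea mentioned above, here is a toy sketch (not Polaris code, and deliberately simplified): a Markov model of N servers where, at each step, an idle server becomes busy with probability p and a busy one frees up with probability q. As N grows, the random fraction of busy servers concentrates around a deterministic recurrence - the mean-field approximation - which is far cheaper to analyse than the full stochastic system.

```python
import random

def simulate(n_servers, p, q, steps, seed=42):
    """Stochastic simulation: track every server individually."""
    rng = random.Random(seed)
    busy = [False] * n_servers
    for _ in range(steps):
        for i in range(n_servers):
            if busy[i]:
                if rng.random() < q:
                    busy[i] = False      # busy server frees up
            elif rng.random() < p:
                busy[i] = True           # idle server picks up work
    return sum(busy) / n_servers         # final fraction of busy servers

def mean_field(p, q, steps, x0=0.0):
    """Deterministic mean-field recurrence: x' = x + p*(1-x) - q*x."""
    x = x0
    for _ in range(steps):
        x = x + p * (1 - x) - q * x
    return x

sim = simulate(10_000, p=0.3, q=0.2, steps=200)
mf = mean_field(p=0.3, q=0.2, steps=200)
# Both approach the fixed point p/(p+q) = 0.6, and the simulated
# fraction concentrates around the mean-field value as N grows.
```

The payoff is that a system with thousands (or billions) of interacting entities collapses to a single one-dimensional recurrence, whose fixed points and convergence can be studied analytically.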
Our ultimate aim is to develop tools and abstractions enabling the understanding of such systems.
We are carrying out fundamental research in connection with numerous other fields. Our goal is not to work on one particular application, but rather to see what all of these systems have in common.
Ultimately, this will enable designers to optimise these platforms. One of the aims of our work on predictive simulation is to let designers "test" their infrastructures or applications against the network or processor technologies they plan to use. As a result, they will be able to predict energy efficiency, performance, and potential problems.
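As an illustration of this kind of "what if" question, here is a hypothetical sketch (the task sizes, platform parameters and cost model are all invented, not Polaris tooling): each task costs some floating-point work and transfers some data, a candidate platform is described by a CPU speed, a network bandwidth and a latency, and a greedy list scheduler predicts the resulting makespan.

```python
import heapq

def predict_makespan(tasks, n_hosts, cpu_speed, bandwidth, latency):
    """Greedy list scheduling: each task goes to the earliest-free host.
    tasks: list of (flops, bytes) pairs; a simple additive cost model."""
    hosts = [0.0] * n_hosts              # time at which each host is free
    heapq.heapify(hosts)
    for flops, nbytes in tasks:
        start = heapq.heappop(hosts)
        duration = flops / cpu_speed + latency + nbytes / bandwidth
        heapq.heappush(hosts, start + duration)
    return max(hosts)                    # completion time of the last host

tasks = [(1e9, 1e6)] * 64                # 64 identical tasks
slow = predict_makespan(tasks, n_hosts=8, cpu_speed=1e9,
                        bandwidth=1e8, latency=1e-4)
fast = predict_makespan(tasks, n_hosts=8, cpu_speed=2e9,
                        bandwidth=1e9, latency=1e-4)
# Doubling the CPU speed and using a 10x faster network shortens the
# predicted makespan - without buying or building either platform.
```

Real predictive simulators use far more faithful models of contention, topology and energy, but the principle is the same: compare candidate technologies before committing to them.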
A need for efficient mathematical tools
This is a major challenge for the coming years, since these large infrastructures will continue to grow and become omnipresent: smart grids, cloud, IoT, etc. Yet if we do not understand how these systems work, we will not be able to optimise them. We will have infrastructures with billions of entities continuously exchanging information, much of it incomplete or obsolete... We need solid mathematical tools in order to analyse these systems and understand how they work.