Network & Communication

Bringing computing and storage closer to the user

Date: 13/12/2023
Stack, a new team led by Adrien Lebre in Nantes, is working to redefine the infrastructures of cloud computing. The goal: to relocate data centres closer to users in order to meet the latency constraints of new uses (augmented reality, Internet of Things, etc.), while trying to minimise the energy impact. To orchestrate this transition from cloud to edge computing, academics and manufacturers are relying on major open-source projects, such as the OpenStack ecosystem, which is supported by a community of more than 1,500 regular developers.
Server illustration © Oleksandr Delyk - Fotolia

Facebook. YouTube. Dropbox. Amazon. In just a few years, cloud services have completely changed our habits. Our photos, videos and other content have migrated to the Internet. The problem? These terabytes are being concentrated in a handful of gigantic, hyper-centralised storage facilities. Located on the other side of the world, these data centres guzzle phenomenal amounts of energy. Files travel unnecessarily, and at great expense, over absurd distances. Not to mention the latency, which does not help matters.
So, the cloud will have to reinvent itself. The next paradigm is called edge computing. It aims to bring data and storage closer to the user. How? By exploiting the myriad of local infrastructures available to telecoms operators throughout their networks, known as points of presence (PoPs). But before that happens, an extremely complex software stack needs to be designed. The new Stack team aims to play a leading role in this mammoth project. “We have been exploring this theme since 2015 through several projects, such as the Discovery project, which brings together several Inria teams,” explains Adrien Lebre. “In three years, things have changed a great deal. Today, edge computing is no longer a matter for debate. Everyone agrees it must be done. The operator Orange is already preparing to deploy new-generation points of presence to implement this future architecture. With new uses like virtual reality or autonomous vehicles, people will need more computing power and lower latency.”

Servers in hidden corners

This utilitarian computing will be housed in the most unexpected corners. “You see those advertising columns in Paris, with their cinema adverts? There is nothing to stop you installing a server inside them. When you look at your smartphone, the device will dialogue with that column, either to use its storage capacity, or to post videos on it. No need to go all the way to the YouTube data centre. In the same way, we could also install a server that doubles as an electric radiator on the fourth floor of the building opposite.” And that’s not all. “Tomorrow, operators will offer new-generation phones. In exchange, subscribers will agree to make, for example, 20% of their phone’s computing power available to other customers. Their phone will therefore sometimes work for one of their neighbours, and vice versa. This is the device-to-device mode. That said, our research will not go that far. We will stop at the level above: that of micro and nano data centres.”

Geo-distribution becomes crucial

With this shift, geo-distribution becomes absolutely essential. “Today, there is a good chance that the traffic of two people holding a web conference between Paris and Lyon will pass through a server located in the United States. Tomorrow, they will use an infrastructure located halfway between the two cities. The same goes for online gaming: the engine that runs the game will be closer to the majority of players.”
Researchers even plan to install machines… on public transport. “On a train, the connection is often mediocre. That’s where the idea came from to put several servers on board, acting in particular as a cache. Passengers who connect to YouTube will actually access videos through this cache. They will also be able to upload their own files, which will be pushed out once the train arrives at the station and the connection improves.”
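To make the on-train scenario concrete, here is a minimal sketch of such a cache with deferred uploads, written in Python. Everything in it is illustrative: the class and function names (OnboardCache, fetch_video, queue_upload, flush) are hypothetical and do not correspond to any existing system from the Stack team.

```python
import queue

class OnboardCache:
    """Toy model of an on-train cache: serve popular videos locally and
    defer passenger uploads until connectivity improves (e.g. at the station)."""

    def __init__(self):
        self.videos = {}              # video_id -> content already cached on the train
        self.pending = queue.Queue()  # uploads waiting for a good connection

    def fetch_video(self, video_id, fetch_from_origin):
        # Serve from the local cache when possible; otherwise fall back to the
        # (possibly slow) origin and keep a copy for the other passengers.
        if video_id not in self.videos:
            self.videos[video_id] = fetch_from_origin(video_id)
        return self.videos[video_id]

    def queue_upload(self, payload):
        # Accept the passenger's file immediately; it will be pushed out later.
        self.pending.put(payload)

    def flush(self, push_to_origin):
        # Called once the train reaches the station and the link improves.
        while not self.pending.empty():
            push_to_origin(self.pending.get())
```

The design choice being illustrated is simply that the train acts as an intermediate site: reads are served locally when possible, and writes are buffered until the network allows them to reach the distant data centre.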

Redesigning everyday applications

In addition to redeploying hardware, “we also need to redesign the web applications that are used every day, so that they can make use of edge computing, geo-distribution and offline modes. The new software stack must allow developers to say: I want my video server to be at such and such a place, because that’s where my audience is. It must also give them the possibility of adapting the location of the application’s components: component #1 in Paris, component #2 in Bordeaux, a replica of component #1 on the train, etc.” In the end, this software stack is akin to an “operating system capable of using massively geo-distributed hardware resources”.
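As a rough illustration of what such a developer-facing placement declaration might look like, here is a small Python sketch. The field names, site identifiers and helper function are purely hypothetical; they are not an API of the Stack team or of OpenStack, only a way of picturing “component #1 in Paris, component #2 in Bordeaux, a replica on the train”.

```python
# Hypothetical placement declaration for a geo-distributed application.
# All names and site identifiers below are illustrative placeholders.
placement = {
    "application": "video-service",
    "components": [
        {"name": "component-1", "site": "paris-pop"},
        {"name": "component-2", "site": "bordeaux-pop"},
        # A replica of component #1 follows the mobile infrastructure (the train),
        # serving passengers in degraded-connectivity or offline mode.
        {"name": "component-1-replica", "site": "train-cache", "replica_of": "component-1"},
    ],
}

def sites_for(component_name):
    """Return every site where a component (or one of its replicas) should run."""
    return [c["site"] for c in placement["components"]
            if c["name"] == component_name or c.get("replica_of") == component_name]

print(sites_for("component-1"))  # ['paris-pop', 'train-cache']
```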
Easier said than done… “The tool that allows us to operate a centralised data centre today is already based on 20 million lines of code. How many more will it take to handle all of the specificities of the edge? Will Inria develop this software stack all by itself? No. Even the big operators would struggle to do it. These systems have become so complex that each actor develops only one brick that interacts with the rest. They only have a fragmentary view of the stack.”

Inria heavily involved in OpenStack

So, who will carry out this change? Several actors are currently studying the extent to which the OpenStack solution could serve as a base. “Started in 2010, it is today the de facto standard for managing cloud servers. It is also one of the world’s largest software projects, relying on a community of 1,500 regular developers and 70,000 users. Inria is very involved: in addition to the Discovery project, we are responsible for an interest group on this theme.”
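For readers curious about what driving OpenStack programmatically looks like, here is a minimal sketch using the openstacksdk Python library to boot a server at a chosen site, the kind of per-site placement an edge deployment would rely on if each point of presence were exposed as a region. The cloud, region, image, flavor, network and server names are placeholders, not references to any real deployment.

```python
import openstack

# Connect to a cloud defined in clouds.yaml; "edge-cloud" and "nantes-pop"
# are placeholder names for an operator's cloud and one of its sites.
conn = openstack.connect(cloud="edge-cloud", region_name="nantes-pop")

# Look up resources by name (placeholders for whatever the deployment defines).
image = conn.compute.find_image("ubuntu-22.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("private")

# Boot a small server at that site, e.g. to host a local video cache.
server = conn.compute.create_server(
    name="video-cache-nantes",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.status)
```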
The project also introduces new software complexities. “OpenStack can be seen as the foundation brick, i.e. the operating system on top of which we will have to operate not only applications like YouTube, but also software stacks such as Hadoop for big data or artificial intelligence. We will have to understand this whole assembly in order to be able to propose adapted mechanisms and software abstractions. Our team brings together experts on these different layers. ”
In practice, scientists will above all guide the collective effort. “We have accumulated an understanding of the different systems in their entirety, which enables us to advise the various actors involved. We can say: develop more like this, avoid going in such and such a direction, etc. First we will strive to produce a software stack that can operate a first level of distribution, that of data centres. That’s the aim of our team for the next four years. ”