Artificial intelligence

ENGAGE: towards a faster and more reliable AI in the processing of complex tasks

Published on 15/06/2022
The fruit of a collaboration between DFKI and INRIA, the ENGAGE (nExt geNeration computinG environments for Artificial intelliGEnce) project focuses on a new generation of computing infrastructures for artificial intelligence, combining high-performance computing and Big Data.
© Unsplash / Photo Uriel SC

DFKI and INRIA, active collaboration in artificial intelligence

In January 2020, INRIA and the Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) organised a first workshop for their research teams, to coincide with the signing of a memorandum of understanding on joint studies in artificial intelligence. Divided into interest groups, the workshop enabled French and German teams to discuss and develop ideas and joint projects, with a view to creating several joint project teams, such as Moveon, MePheSTO and IMPRESS.

Two years later, the ENGAGE project was launched. Led by Gabriel Antoniu on the INRIA side (Kerdata project team at the Rennes University INRIA Centre) and Hilko Hoffmann from DFKI, this project explores how HPC (High Performance Computing) environments can be optimised and used efficiently with other hardware environments for artificial intelligence.

“For several years now, we’ve been seeing a convergence between HPC, Big Data and artificial intelligence. We already had a solid study framework in place at INRIA (e.g., the HPC-Big Data Challenge, launched in 2018), enabling us to explore these topics on a European level”, says Gabriel Antoniu, INRIA senior researcher and head of the Kerdata project team. “This collaboration with DFKI was a natural move, with a majority of researchers specialised in HPC at INRIA, and in AI at DFKI”, he adds.

Developing deep neural networks for faster and more energy-efficient results

At the heart of the new Franco-German project lie deep neural networks (i.e., sets of algorithms capable of simulating human brain activity in order to process data in a complex way using advanced mathematical models), which are omnipresent across a vast range of industrial and scientific fields. The aim is to train and use these networks more rapidly for new, energy-intensive computing tasks, in particular by drawing on HPC for machine learning.

Whether for image recognition, dynamic environment detection, the transition towards a more flexible production process or the simulation of drug side-effects, deep neural networks are currently producing highly satisfying results.

However, in order to operate with high accuracy in the case of complex uses, these networks require an extremely powerful infrastructure, energy-intensive computing, and above all, large amounts of learning data. The latter is a key issue, given that in numerous cases, such data is unavailable or insufficient.

“We usually take real data as a basis, but in certain situations this data doesn’t exist, either because the events we’re trying to model are rare, or because it’s too expensive or too difficult to set up experiments or situations which would produce real data.”

Gabriel Antoniu, co-director of the project

This is the case in healthcare, for the detection of certain forms of cancer; in the self-driving vehicle sector, for particularly dangerous road traffic situations; or in industrial production, where predicting the lifespan of new machine parts requires models even though real data is unavailable when the product is launched on the market.


Three focus areas for the Franco-German team

In order to deal with tasks that lack real data for efficient learning, computing must turn to synthetic data: artificial data is generated upstream, and the neural network then learns from it.
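As a minimal illustration of this idea (a hypothetical toy sketch, not part of the ENGAGE tooling), a simple "simulator" can generate labelled synthetic samples upstream, on which a classifier is then trained:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n):
    """Toy 'simulator': draws 2-D points and labels them by a known rule.
    Stands in for an expensive simulation producing synthetic training data."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)  # ground-truth labelling rule
    return X, y

def train_logistic(X, y, lr=0.1, epochs=200):
    """Plain gradient-descent logistic regression on the synthetic set."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
        grad_w = X.T @ (p - y) / len(y)         # gradient of the log-loss
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

X_train, y_train = simulate(1000)   # synthetic data generated upstream
w, b = train_logistic(X_train, y_train)

X_test, y_test = simulate(200)      # held-out synthetic samples
acc = np.mean(((X_test @ w + b) > 0) == y_test)
```

Real use cases would of course replace the toy simulator with a physically meaningful one, which is precisely where the cost in computing resources arises.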

This learning process, based on simulation data, is costly in terms of computing resources and time, and raises several questions on the reliability of AI. These issues form the basis of the first focus area of ENGAGE. “We’re exploring, for example, how to parametrise learning and at which rate, or if what we want to model can be produced correctly by the process. To ensure the quality of the model, you have to launch ensembles of simulations with different parameters”, Gabriel Antoniu explains.
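The ensemble idea Gabriel Antoniu describes can be sketched as follows (hypothetical code; real ENGAGE ensembles run full-scale simulations, not this toy model): launch the same simulation over a grid of parameters and measure how sensitive the output is to the parametrisation:

```python
import itertools
import statistics

def simulate(step_size, damping, steps=100):
    """Toy damped-oscillator time stepping; stands in for an expensive solver."""
    x, v = 1.0, 0.0
    for _ in range(steps):
        v += (-x - damping * v) * step_size   # acceleration term
        x += v * step_size
    return x

# Ensemble: one simulation run per point of the parameter grid.
param_grid = itertools.product([0.01, 0.05, 0.1],   # step sizes
                               [0.1, 0.5])          # damping factors
results = {params: simulate(*params) for params in param_grid}

# A large spread signals that the model is sensitive to its parametrisation.
spread = statistics.pstdev(results.values())
```

On an HPC system, the runs of such an ensemble are independent and can be dispatched in parallel, which is what makes this validation strategy tractable.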

This applicative aspect, led specifically by the INRIA Datamove project team, which is behind the Melissa framework, will enable the validation and certification of AI systems via targeted tests on data generated synthetically through simulations. The aim is to increase AI reliability and thus improve its acceptance in fields such as self-driving vehicles or industrial production.

The second focus area of ENGAGE, led among others by the INRIA Kerdata project team (Rennes), is to explore various deployment strategies for complex artificial intelligence workflows, which combine simulations and data analysis on hybrid execution infrastructures (Cloud and Edge, or Cloud, Edge and HPC).

“Right now, there are tools for each infrastructure, but not for hybrid scenarios.”

Gabriel Antoniu

The use of this digital continuum raises several challenges, related in particular to performance modelling and to the heterogeneity of resources (varying processing capacities, energy-use constraints, etc.). In this context, the Kerdata team is developing a methodology for the deployment, monitoring and execution of large-scale experiments on a variety of relevant, scalable infrastructures. This methodology is embodied in E2Clab, currently in its design and development phase within the team, which will serve as a framework for the work carried out in this focus area.

The third focus area of ENGAGE, led on the French side by the HiePACS team (Bordeaux), addresses resource management, and more specifically the optimisation of resource use for artificial intelligence workflows through better exploitation of parallel computing. To this end, the team is developing a set of methodological and algorithmic tools for memory management and the efficient use of heterogeneous computing resources.
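One way to picture this resource-management problem (a generic list-scheduling sketch under assumed task costs, not HiePACS's actual algorithms): workflow tasks are greedily assigned to heterogeneous processors of different speeds:

```python
import heapq

def schedule(task_costs, speeds):
    """Greedy list scheduling on heterogeneous processors: each task
    (largest first) goes to the processor that frees up earliest.
    Returns the (cost, processor) assignments and the overall makespan."""
    # Min-heap of (time at which the processor becomes free, processor index).
    free_at = [(0.0, p) for p in range(len(speeds))]
    heapq.heapify(free_at)
    assignment = []
    for cost in sorted(task_costs, reverse=True):
        finish, p = heapq.heappop(free_at)
        finish += cost / speeds[p]        # execution time depends on speed
        assignment.append((cost, p))
        heapq.heappush(free_at, (finish, p))
    makespan = max(t for t, _ in free_at)
    return assignment, makespan

# Four tasks on two processors, the second twice as fast as the first.
assignment, makespan = schedule([4, 3, 2, 1], speeds=[1, 2])
```

Real AI workflows add further constraints the sketch ignores, such as memory capacity and data-transfer costs, which is what makes the problem a research topic in its own right.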

Over the course of three years, the Franco-German teams will work together on the use of HPC for machine learning, which may lead to further joint projects. “These partnerships are always interesting; firstly because they help us to build a network of co-workers in specific subject areas, to keep up to date with our German colleagues’ centres of interest and to broaden our own scope of interests. Secondly, they allow us to envisage future European projects”, Gabriel Antoniu concludes.