Fields of research
Exploratory actions: opening up new lines of research
Exploratory actions aim to promote the emergence of new research themes. They give scientists the means to test out original ideas. These test runs can then be extended, leading to the creation of a fully-fledged Inria project-team. Below is a presentation of the exploratory actions put in place by Inria.
Exploratory actions provide an opportunity to trust in researchers' intuition. The system allows Inria to mobilise resources to address very innovative, risky subjects that represent a departure from the institute's traditional approaches, for example in artificial intelligence, digital health or digital agriculture. It provides the means to examine a subject in detail and prove its scientific relevance: a vital stage before creating a project-team. It can also mean exploring unusual themes at the margins of Inria's sphere of action, such as subjects concerning social sciences or legal issues.
AI4HI : Artificial Intelligence for Human Intelligence
In an ideal educational world, each learner would have access to individual pedagogical help, tailored to their needs: for instance, a tutor who could react rapidly to questions, propose pedagogical content that matches the learner's skills, and identify and work on their weaknesses. However, the real world imposes constraints that make this individual pedagogical help hard to achieve.
The goal of the AI4HI project is to combine the new advances in artificial intelligence with the team's skills in compilation and teaching to aid teaching through the automated generation and recommendation of exercises to learners. In particular, we target the teaching of programming and debugging to novices. This system would propose exercises that match the learners' needs and hence improve the learning, progression, and self-confidence of learners.
- Leader : Florent Bouchez Tichadou
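The exercise-matching loop described above can be sketched very roughly as follows. This is an illustrative assumption, not the AI4HI system: the names (`Exercise`, `recommend`, `update_skill`), the difficulty scale and the update rule are all invented for the example.

```python
# Hypothetical sketch of exercise recommendation: propose the exercise whose
# difficulty sits just above the learner's estimated skill, then refine the
# skill estimate from the outcome. All names and rules are illustrative.
from dataclasses import dataclass

@dataclass
class Exercise:
    name: str
    difficulty: float  # 0.0 (trivial) .. 1.0 (hard)

def recommend(exercises, skill, stretch=0.1):
    """Pick the exercise closest to slightly above the learner's skill."""
    target = skill + stretch
    return min(exercises, key=lambda e: abs(e.difficulty - target))

def update_skill(skill, exercise, solved, rate=0.2):
    """Nudge the skill estimate toward (or away from) the exercise difficulty."""
    if solved:
        return skill + rate * max(0.0, exercise.difficulty - skill)
    return skill - rate * max(0.0, skill - exercise.difficulty + 0.1)

bank = [Exercise("hello-world", 0.1), Exercise("loops", 0.4),
        Exercise("recursion", 0.7), Exercise("pointers", 0.9)]
skill = 0.3
ex = recommend(bank, skill)          # picks "loops" for a 0.3-skill learner
skill = update_skill(skill, ex, solved=True)
```

A real system would estimate skills and difficulties from data rather than fixing them by hand; the sketch only shows the recommend-observe-update cycle.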
COML - The Cognitive Machine Learning Team
The aim of the Cognitive Machine Learning team is to reverse engineer human learning abilities, i.e., to construct effective and scalable algorithms which perform at least as well as humans when provided with similar data, to study their mathematical and algorithmic properties, and to test their empirical validity as models of humans by comparing their output with behavioral and neuroscientific data. The expected results are more adaptable and autonomous machine learning algorithms for complex tasks, and quantitative models of cognitive processes which can be used to predict human developmental and processing data.
- Leader : Emmanuel Dupoux
ELAN - ModEling the appearance of Nonlinear phenomena
ELAN has the ambition to become a unique simulation team at Inria, with an original positioning across Computer Graphics and Computational Mechanics. The team is focussed on the design of predictive, robust, efficient and controllable numerical models for capturing the shape and motion of visually rich mechanical phenomena, such as the buckling of an elastic plate, the flowing of a sand pile, or the entangling of large fiber assemblies. Target applications encompass the digital entertainment industry (e.g., feature animation, special effects), as well as virtual prototyping for the mechanical engineering industry (e.g., aircraft manufacturing, cosmetology); though very different, these two application fields require predictive and scalable models for capturing complex mechanical phenomena at the macroscopic scale. An orthogonal objective is the improvement of our understanding of natural physical and biological processes involving slender structures (such as plant growth, granular flows, DNA supercoiling), through active collaborations with soft matter physicists. To achieve its goals, the team is striving to master as finely as possible the entire modeling pipeline, involving a pluridisciplinary combination of scientific skills across Mechanics and Physics, Applied Mathematics, and Computer Science.
- Leader : Florence Bertails-Descoubes
ETHICAM : Emerging TecHnologIes for new CommunicAtion paradigMs
The evolution of the Internet of Things (IoT) towards the Internet of Everything (IoE) paradigm represents an important and emerging research direction, capable of connecting and interconnecting a massive number of heterogeneous nodes, both inanimate and living entities, encompassing molecules, nanosensors, vehicles and people. This new paradigm demands new communication engineering solutions to overcome miniaturization constraints and spectrum scarcity.
Novel pervasive communication paradigms will be conceived by means of a cutting-edge multidisciplinary research approach integrating (quasi)particles (e.g. phonons) and specific features of the (meta)material (e.g. chirality) in the design of the communication mechanisms.
- Leader : Valeria Loscri
KOPERNIC - Keeping wOrst case reasoning aPpropriatE foR differeNt critICALITIES
A cyber-physical system (CPS) has cyber (or computational) components and physical components that communicate. The Kopernic team deals with the problem of studying the time properties (the execution time of a program, the schedulability of communicating programs, etc.) of the cyber components of a CPS. The cyber components may have functions with different criticalities with respect to time, and a solution should come with appropriate proofs for each criticality. A solution is appropriate for a criticality level if all functions fulfill the expectations of that criticality level.
Based on their mathematical foundations, solutions are either non-probabilistic, when all time properties are estimated and/or bounded by numerical values, or probabilistic, when at least one time property is estimated and/or bounded by a probability distribution.
The Kopernic team proposes a system-oriented solution to the problem of studying time properties of the cyber components of a CPS. The solution is expected to be obtained by composing probabilistic and non-probabilistic approaches for these systems.
- Leader : Liliana Cucu
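The contrast between the two kinds of bounds can be illustrated on measured execution times. This is a deliberately naive sketch, not Kopernic's method: a non-probabilistic bound keeps only the largest observation, while a probabilistic bound keeps the empirical distribution and reads off a high quantile.

```python
# Illustrative only: two ways of bounding an execution time from measurements.
def max_bound(times):
    """Non-probabilistic bound: largest observed execution time."""
    return max(times)

def quantile_bound(times, p=0.99):
    """Probabilistic bound: a time not exceeded with empirical probability p."""
    ordered = sorted(times)
    idx = min(len(ordered) - 1, int(p * len(ordered)))
    return ordered[idx]

# Measured times (arbitrary units); one rare outlier dominates the max bound.
measurements = [10, 12, 11, 13, 12, 50, 12, 11, 13, 12]
```

On this data `max_bound` is driven entirely by the single outlier, while `quantile_bound` characterises the typical worst case at a chosen probability level; real probabilistic timing analysis involves far more careful statistics than an empirical quantile.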
MALESI : MAchine LEarning for SImulation
In order to understand certain physical and biological phenomena in detail, numerical methods are used to solve the associated equations by computer.
For several years now, very precise methods have been available, but they can generate numerical pollution that destroys the quality of the results. The aim of the project is to adapt learning and AI methods to detect and correct this pollution. Ultimately this could lead to simulation codes that learn to correct some of their bad behaviours and optimize themselves autonomously.
- Leader : Emmanuel Franck
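One toy reading of "detecting numerical pollution" (an assumed criterion for illustration, not the project's method) is that spurious oscillations in a numerical solution show up as repeated sign changes in consecutive differences, which a simple indicator can flag before any learned corrector is applied:

```python
# Illustrative oscillation detector: counts sign flips between consecutive
# differences of a sampled solution. A real detector would be learned.
def oscillation_score(values):
    """Fraction of consecutive difference pairs that change sign."""
    diffs = [b - a for a, b in zip(values, values[1:])]
    flips = sum(1 for d1, d2 in zip(diffs, diffs[1:]) if d1 * d2 < 0)
    return flips / max(1, len(diffs) - 1)

smooth = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]      # monotone: score 0
polluted = [0.0, 0.3, 0.1, 0.4, 0.2, 0.5]    # oscillating around the trend
```

In the project's setting, such hand-crafted indicators would be replaced or complemented by models trained to recognise and correct the pollution directly.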
OptiTrust : Producing trustworthy high-performance code via source-to-source transformations
To implement a high-performance code for a numeric simulation, one needs to turn a high-level algorithm into optimized code. In order to take advantage of the numerous optimizations that are far out-of-reach for an automated compiler, the programmer needs to hand-tune the code. This manual rewriting phase is problematic on several grounds: it is time-consuming, it over-specializes the code for a given hardware, it results in code that is much harder to maintain, and, perhaps most importantly, it may introduce subtle bugs that are very hard to detect, especially in the case of parallel algorithms.
The aim of the OptiTrust project is to develop a framework for producing trustworthy high-performance code. The idea is to derive the optimized code from the high-level algorithm through a sequence of transformations, guided by the programmer. The transformation steps are recorded in a script, which includes in particular formal statements of the properties that are exploited to justify the correctness of the transformations. The programmer gets interactive feedback on the state of the code at each optimization step, and may replay the proof script after minor changes or extensions to the algorithm.
We plan to demonstrate the practicality of this approach. On the one hand, we will formally verify (in Coq) generic source-to-source transformations commonly used by HPC developers. On the other hand, we will formally verify a state-of-the-art particle-in-cell (PIC) parallel algorithm used for plasma simulations. If successful, this project will deliver the first formally-verified high-performance code for a numeric simulation.
- Leader : Arthur Charguéraud
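The script-driven workflow can be caricatured in a few lines. This is an assumed toy structure, not OptiTrust's actual API: code is a tiny dictionary "AST", each transformation is a pure function, and the "script" is simply the recorded list of steps, which can be replayed after the source changes.

```python
# Toy source-to-source transformation pipeline (illustrative assumption).
def unroll(loop, factor):
    """Unroll a {'for': n, 'body': [...]} loop by an integer factor."""
    assert loop["for"] % factor == 0, "factor must divide the trip count"
    return {"for": loop["for"] // factor, "body": loop["body"] * factor}

def replay(code, script):
    """Re-apply a recorded sequence of (transformation, args) steps."""
    for transform, args in script:
        code = transform(code, *args)
    return code

loop = {"for": 8, "body": ["a[i] += b[i]"]}
script = [(unroll, (2,)), (unroll, (2,))]   # recorded optimization script
optimized = replay(loop, script)            # 2 iterations, 4-statement body
```

The point of the real framework is that each such step comes with a formally verified justification, so replaying the script preserves correctness by construction.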
SNIDE : Search Non neutralIty Detection
Search engines play a key role in accessing content and have been accused of biasing their results to favour their own services. This has led to the sensitive "search neutrality" debate, similar to the one on network neutrality. Our goal in this project is to construct and apply a methodology that highlights whether or not a bias exists and potentially quantifies its impact.
- Leader : Bruno Tuffin
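One crude way to quantify such a bias (an assumption for illustration, not SNIDE's methodology) is to compare the average rank of the engine's own services in a result page against that of third-party results:

```python
# Illustrative bias indicator on one result page; result names are invented.
def mean_rank(results, own):
    """Average 1-based rank of the results belonging to the given set."""
    ranks = [i + 1 for i, r in enumerate(results) if r in own]
    return sum(ranks) / len(ranks) if ranks else float("inf")

def bias_score(results, own):
    """Positive when own services rank better (lower) than the others."""
    others = set(results) - own
    return mean_rank(results, others) - mean_rank(results, own)

page = ["own-video", "news", "own-maps", "blog", "shop"]
score = bias_score(page, own={"own-video", "own-maps"})  # positive here
```

A serious methodology would aggregate over many queries and control for relevance; the sketch only fixes the intuition of "own services ranked systematically higher".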
SR4SG : Sequential collaborative learning of recommendations for sustainable gardening
The identification and sharing of good, sustainable agriculture practices is both a scientific and a societal challenge. The high scalability of recommender systems, which coherently aggregate data from millions of actively engaged users and constantly benefit from research in machine learning, suggests that connecting this field to sustainable agriculture may answer this challenge with significant success. The goal of the project "Sequential Recommendation for Sustainable Gardening (SR4SG)" is threefold:
1) to gather researchers in the fields of recommender systems, sequential and reinforcement learning on the one hand, and in sustainable agriculture, ecology and biodiversity preservation on the other hand, to form an ambitious mixed community working in close collaboration.
2) to create a crowdsourced platform of "participative science" to collect sequential observations and actions in everyone's garden, enabling users to receive constantly improving recommendations powered by state-of-the-art algorithms, and researchers to organize recommendation challenges and improve their understanding of sustainable agricultural practice at large.
3) to lay the theoretical foundations of sequential learning for sustainable gardening, identify the novel bottlenecks, and engage the reinforcement learning community in the process of solving them. This project funds two years of engineering, one year of postdoctoral research, and several workshops in order to make significant progress on these three ambitious points.
- Leader : Odalric-Ambrym Maillard
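The sequential-learning core can be sketched as a classical UCB bandit, used here purely as illustration (the gardening actions are invented): each "arm" is a practice, the reward is the observed outcome, and the recommendation balances exploiting what worked with exploring the rest.

```python
# Standard UCB1 recommender, as an illustrative stand-in for the project's
# sequential learning; action names and rewards are invented.
import math

class UCBRecommender:
    def __init__(self, actions):
        self.actions = list(actions)
        self.counts = {a: 0 for a in self.actions}
        self.means = {a: 0.0 for a in self.actions}
        self.t = 0

    def recommend(self):
        self.t += 1
        for a in self.actions:          # try every action once first
            if self.counts[a] == 0:
                return a
        return max(self.actions, key=lambda a: self.means[a]
                   + math.sqrt(2 * math.log(self.t) / self.counts[a]))

    def observe(self, action, reward):
        self.counts[action] += 1
        n = self.counts[action]
        self.means[action] += (reward - self.means[action]) / n

rec = UCBRecommender(["mulching", "companion-planting"])
```

In the project's setting the rewards would come from users' sequential garden observations, and the theory questions concern exactly how such algorithms behave under those constraints.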
TRACME: modelling a physical system
This project focuses on modelling a physical system from measurements on that system. How can one build a reliable model of the system dynamics starting from observations? When multiple processes interact at different scales, how can one obtain a meaningful model at each of these scales? The goal is to provide a model simple enough to bring some understanding of the system studied, but also elaborate enough to allow precise predictions. To do so, this project proposes to identify causally equivalent classes of system states, then model their evolution with a stochastic process. Renormalising these equations is necessary in order to relate the scale of the continuum to the arbitrary scale at which data are acquired. Applications primarily concern the natural sciences.
- Leader : Nicolas Brodu
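One simplified reading of "causally equivalent classes of states" (a sketch in the spirit of computational mechanics, not the project's actual method) is to group together the pasts of a time series that predict the same distribution over the next symbol:

```python
# Illustrative grouping of length-k histories by their empirical
# next-symbol distribution; real causal-state reconstruction is subtler.
from collections import defaultdict, Counter

def causal_classes(sequence, k=1):
    """Group length-k histories that share the same next-symbol distribution."""
    nexts = defaultdict(Counter)
    for i in range(len(sequence) - k):
        past = tuple(sequence[i:i + k])
        nexts[past][sequence[i + k]] += 1
    classes = defaultdict(list)
    for past, counts in nexts.items():
        total = sum(counts.values())
        dist = tuple(sorted((s, c / total) for s, c in counts.items()))
        classes[dist].append(past)
    return list(classes.values())

# Period-2 signal: after 0 always comes 1, after 1 always comes 0,
# so the two histories fall into two distinct causal classes.
groups = causal_classes([0, 1, 0, 1, 0, 1, 0, 1], k=1)
```

The project then models the evolution of such classes with a stochastic process and renormalises across scales, which this toy example does not attempt.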
Exploratory actions completed
Ctrl-A: Control Techniques for Autonomic, Adaptive and Reconfigurable Computing systems
Computing systems are more and more ubiquitous, at scales from tiny embedded systems to large-scale cloud infrastructures. They are more and more adaptive and reconfigurable, for resource management, energy efficiency, or by functionality. Furthermore, these systems are increasingly complex and autonomous: their administration cannot any longer rely on a strong interaction with a human administrator. The correct design and implementation of automated control of the reconfigurations and/or their tuning is recognized as a key issue for the effectiveness of these adaptive systems.
Our objective is to build methods and tools for the design of safe controllers for autonomic, adaptive, reconfigurable computing systems. To attain this goal, we propose to combine Computer Science and Control Theory, following the axes corresponding to the different levels of this co-design problem: adaptive systems infrastructures, programming support, and modeling and control techniques.
Our team groups complementary competences from different laboratories in order to contribute more efficiently to the topic of hardware/software interfaces, which is particularly active locally in Grenoble, and more widely nationally and internationally in the emerging community on Feedback Computing.
ESTASYS: developing brand new formal methods for Systems of Systems
Computer systems play a central role in modern societies and their errors can have dramatic consequences. Industry and academia thus invest a considerable amount of effort in developing techniques to prove the correctness of these systems. Among such techniques, one finds (1) testing, the traditional approach to detecting bugs with test cases, and (2) formal methods, e.g., model checking (recognized by a Turing Award), that can guarantee the absence of bugs. Both approaches have been largely deployed on static systems, whose behaviour is entirely known. ESTASYS focuses on developing brand new formal methods for Systems of Systems.
FLOWERS: Baby robot learning
Can a robot learn like a baby and explore the world around it without being programmed by an engineer? This is the incredible proposition being explored by a team at Inria Bordeaux Sud-Ouest. Without imitating human intelligence in the same way as artificial intelligence, these researchers in behavioural and social robotics are trying to create a system capable of learning and developing by itself, in the same way that a child does.
Developmental psychologists have deciphered the logic behind these complex processes, based on spontaneous exploration. Implementing a "curiosity function" of this kind in robots' "brains" would allow them to learn for themselves. The team has already put this concept to the test. It is now attempting to pair this learning about the body and space with language learning, thus paving the way for autonomous social interaction of robots with humans. Such robots would be better able to cope with unknown spaces and situations. They could also be used to test the pertinence of psychologists' theories.
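The "curiosity function" can be sketched as learning-progress-based choice, a standard intrinsic-motivation scheme used here purely as illustration (activity names and error values are invented): the robot keeps a recent prediction-error history per activity and prefers the one where its error is dropping fastest, i.e. where it is currently learning the most.

```python
# Illustrative learning-progress curiosity: prefer the activity whose
# prediction error has improved the most over a recent window.
def learning_progress(errors, window=3):
    """Recent improvement: old average error minus new average error."""
    if len(errors) < 2 * window:
        return float("inf")          # not enough data yet: stay curious
    old = sum(errors[-2 * window:-window]) / window
    new = sum(errors[-window:]) / window
    return old - new

def choose_activity(error_log):
    """Pick the activity with the highest current learning progress."""
    return max(error_log, key=lambda a: learning_progress(error_log[a]))

log = {
    "grasping": [0.9, 0.8, 0.7, 0.5, 0.4, 0.3],   # improving fast
    "babbling": [0.2, 0.2, 0.2, 0.2, 0.2, 0.2],   # mastered: no progress
}
best = choose_activity(log)
```

This captures why such a robot abandons both the impossible (no progress) and the mastered (no progress left) in favour of what is currently learnable.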
InBio: an interdisciplinary research group combining wet and dry biology in the same lab
Our main goal is to develop a comprehensive methodological framework supporting the development of a quantitative understanding of cellular processes. Given a process of interest and current knowledge on the system, the problem is to iteratively decide which strain to construct and which experiment to run to characterize the process in an optimal manner, perform the chosen experiment, and update the current knowledge on the process.
We combine systems and synthetic biology approaches with active learning and control methods, together with stochastic and statistical modeling frameworks.
InBio is an Inria / Pasteur Institute joint research group. It is hosted at Institut Pasteur and affiliated to the Lifeware team at Inria Saclay - Ile-de-France.
LICIT: An ethical approach to computer science
Information technology is everywhere: in a large number of devices, from washing machines to aeroplanes, in the RFID chips that control access to buildings, in car locking systems and, of course, in Internet systems, but also in transport cards, biometric passports and video surveillance. How can collective and individual freedoms be protected against this wave of new services and uses of information technology?
A team from Inria Grenoble - Rhône-Alpes has decided to tackle this challenge by opening up a new field of research, taking legal and ethical criteria into account when designing computer systems. Along with lawyers, they are revisiting the principles of privacy and inventing a formal framework for a data protection infrastructure. They are also proposing methods for establishing legal responsibilities in terms of software.
MUSE: Measuring networks for enhancing USer Experience
Our research is mostly in the area of network measurements. We focus on developing new algorithms and systems to improve user experience online. In particular, we are addressing two main problems faced by today's Internet users:
- Technology is too complex. Most Internet users are not tech-savvy and hence cannot fix performance problems and anomalous network behavior by themselves. The complexity of most Internet applications makes it hard even for networking experts to fully diagnose and fix problems. Users cannot even tell whether they are getting the Internet performance they are paying their providers for.
- There is too much content. Users are often lost when deciding which articles to read or which movie to watch, for instance.
NANO-D: Virtual mock-ups on an atomic scale
Many manufactured goods, from cars to aeroplanes, are designed and tested using computers. This approach has undeniable advantages in terms of production costs and lead times. The aim of the researchers at Inria Grenoble - Rhône-Alpes is to design effective algorithmic methods to do the same on an atomic scale. Why? To model and simulate complex nanometric systems, be they natural nano-systems, such as proteins, or artificial ones, such as miniature mechanical structures.
The problem is difficult, given the large number of atoms involved as well as the duration and complexity of the phenomena to be simulated. All these barriers make such simulations too expensive. Efficient methods are therefore a very attractive proposition. In particular, researchers are developing new, adaptive approaches which automatically concentrate computing resources on the most relevant parts of the nano-systems under consideration.
STEEP: Modelling sustainable development
Making decisions about the construction of a dam, estimating the impact of an urbanisation project, choosing a waste processing technology: all these technological choices will have repercussions in terms of sustainable development. Yet local and regional authorities are cruelly lacking in tools to help them make these choices.
To address this problem, researchers at Inria Grenoble - Rhône-Alpes are exploring two new types of decision aids. The first simulates complex systems in which numerous factors, particularly human factors, interact. The objective is to anticipate the impacts of such policy choices on biodiversity and local resources… based on a variety of scenarios in respect of climate change and global economic developments. The second tool developed aims to optimise choices in terms of costs, not only from an economic point of view but also from an environmental and social perspective.
TAPDANCE: Theory and Practice of DNA Computing Engines
Imagine if we could control nanoscale matter in the sophisticated way we control information using computers. This form of ultimate control would lead to fancy drugs that act like molecular doctors to diagnose and cure patients and efficient chemical manufacturing processes that exploit nanoscale logical interactions. The Inria TAPDANCE team focuses on both the theory and practical implementation of such molecular computers:
- We invent new models of molecular computers and mathematically characterise their computational power.
- We design and engineer molecular computers in the wet-lab, using DNA as a building material.
TAPDANCE is an approximate acronym for Theory and Practice of DNA Computing Engines.