Artificial intelligence: when machines learn to forget
The work of Daniele Calandriello, a doctoral candidate on the Sequel project team at the Inria Lille - Nord Europe research center, has just received an award from the French Association for Artificial Intelligence (AFIA). He has developed a method by which machines retain only the essential elements of very large datasets, making them more autonomous.
Neuroscientists who study memory mechanisms consider our ability to forget just as essential to intelligence as our capacity to store information. Memorizing only important information lets us adapt better to changing environments. This is one of the main differences between human and artificial intelligence: we can generalize and extrapolate from information that is sometimes only partial, whereas artificial intelligence generally operates on very large amounts of data from which an algorithm builds connections and graphs. Although these mechanisms are very efficient in certain settings, at very large scale they run up against the storage capacity of our hardware, and programs can no longer run beyond a certain size.
Machines that know how to forget
For several years, the Sequel project team at the Inria Lille - Nord Europe center has been working on algorithms that enable computers to sort large volumes of data and thus extend their processing capacity. Daniele Calandriello, who did his thesis work as a member of the team, has just received the 2018 AI Thesis prize from the French Association for Artificial Intelligence (AFIA). His work, entitled Efficient Sequential Learning in Structured and Constrained Environments, was co-supervised by two Inria research scientists from the team, Michal Valko and Alessandro Lazaric (currently on assignment at Facebook). "With his method, programs that could previously run only in a data center will now be accessible from a smartphone," explains Michal Valko.
This work applies to stochastic environments, that is, environments in which very large amounts of data arrive as a random stream. This is the case for social networks, which constantly collect new information, and for facial recognition systems, which analyze images of people as they move. The graphs generated by this data grow very quickly, and the programs that process them require colossal computing power. With the method developed by Daniele Calandriello and the Sequel team, a machine can learn on its own to sort the connections between incoming data and keep only what is essential. No data are lost, but the graphs that connect them are lighter.
From marketing to medicine
The method works sequentially: each time a new piece of information is recorded, the system assesses the relevance of the new elements with respect to the data already stored. Artificial intelligence algorithms that operate on these simplified graphs therefore process a smaller volume of information without losing accuracy, which improves their performance. The main advantage, however, is that human intervention in the process is reduced to a minimum, making the machine more autonomous.
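The sequential keep-or-forget pattern described above can be sketched in a few lines of Python. The sketch below is purely illustrative: it scores each incoming edge with a crude degree-based relevance test (a hypothetical stand-in for the rigorous relevance scores used in the actual research) and keeps the edge only while at least one of its endpoints is still weakly connected. It shows the streaming structure of the idea, not the real algorithm.

```python
class StreamingGraphSparsifier:
    """Illustrative sketch: a machine that "forgets" graph connections.

    Each incoming edge is judged against what is already stored and is
    kept only while one endpoint has fewer than k retained connections.
    This degree threshold is an assumption for illustration only, not
    the relevance measure used in the published work.
    """

    def __init__(self, k=3):
        self.k = k          # connectivity budget per node (illustrative)
        self.degree = {}    # node -> degree in the sparsified graph
        self.edges = []     # edges retained so far

    def observe(self, u, v):
        du = self.degree.get(u, 0)
        dv = self.degree.get(v, 0)
        # A new or weakly connected endpoint makes the edge "pertinent".
        if min(du, dv) < self.k:
            self.edges.append((u, v))
            self.degree[u] = du + 1
            self.degree[v] = dv + 1
            return True
        return False        # edge forgotten; both nodes remain known
```

Feeding it the complete graph on 30 nodes (435 edges, one at a time) retains far fewer edges, yet every node still appears in the sparsified graph — the stored structure stays lighter than the stream without losing any node.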
The method, which was formalized and proved by Daniele Calandriello, stems from the work of Michal Valko. Starting in 2009, Valko developed a facial recognition algorithm for Intel that enabled users to unlock their devices with their face. The team now envisages marketing applications in partnership with Adobe. "We are preparing an application that will be able to process all the information published worldwide on social networks. With it, we can identify the best influencers at a given moment during a marketing, or even a political, campaign. Just imagine: Facebook represents two billion nodes. Formerly, our programs could work only with a limited dataset, such as a world region. Now, we will be able to expand the scale." The method could also be applied to medicine. "Imagine a system that could continuously record and analyze all information concerning a patient. This would be a great tool for doctors." In the field of health, Sequel is in contact with DeepMind teams.
Artificial intelligence is a priority
Daniele Calandriello is the fifth member of the Sequel team to receive this prestigious AFIA award. His work will also be presented at the International Conference on Machine Learning, held in Stockholm on July 10, 2018. The award confirms Inria's impact in this field of research and rewards its continuing efforts in recent years to prioritize artificial intelligence across its project teams. While Sequel is dedicated to basic research on online learning, other project teams work on offline learning and on various applications of these algorithms. Inria also contributed to the recent Villani report on artificial intelligence.