Artificial intelligence

AI security and sovereignty: INESIA unveils its roadmap for 2026-2027

Updated on 05/03/2026

A pillar of France's national artificial intelligence strategy, INESIA has reached a key milestone with the approval of its strategic priorities for the next two years. Working alongside the DGE, SGDSN, ANSSI, LNE, and PEReN, Inria is contributing its scientific expertise to build a sovereign capacity for evaluating advanced AI systems.

The global landscape of artificial intelligence is evolving at an unprecedented pace, placing security and trust at the heart of sovereignty issues. It is in this context that the National Institute for AI Evaluation and Security (INESIA) has just published its roadmap for the period 2026-2027. This strategic document sets out a clear ambition: to provide France with a robust AI evaluation framework capable of ensuring safe innovation while protecting citizens.

A national ambition supported by an ecosystem of experts

Created in 2025, INESIA is jointly led by the Directorate-General for Enterprise (DGE) and the General Secretariat for Defense and National Security (SGDSN). It brings together an ecosystem of leading national players, combining the expertise of Inria, ANSSI, LNE, and PEReN, around a common goal: to support the development of artificial intelligence and the economic transformation it brings about, by studying the effects of this technology scientifically, particularly with regard to security.

Three thematic areas to address the challenges of AI

The adopted roadmap, to which Inria contributes in particular through the “AI Evaluation” program led by the Digital Programs Agency, outlines INESIA's actions around three thematic areas and one cross-cutting theme:

  • Support for regulation: this area aims to develop cutting-edge technical expertise for AI regulatory authorities. INESIA will facilitate access to evaluation methods and tools, strengthen capabilities for detecting synthetic content to combat information manipulation, and contribute to the development of certification methods adapted to AI cybersecurity.
  • Systemic risks: the objective of this cluster is to deepen national expertise on the systemic risks that could be generated by the most advanced AI systems. Research will enable a better understanding of these risks and the design of appropriate mitigation methods. The study of agentic systems will provide valuable insights into this rapidly evolving field. This work will help strengthen France's commitment to the international network of AI Safety Institutes.
  • Performance and reliability: this cluster aims to stimulate innovation and creativity by fostering healthy competition among stakeholders, in particular through the organization of technical challenges. In a spirit of coopetition, these initiatives will create the momentum needed to advance the state of the art.

A cross-cutting focus will equip INESIA members with shared tools to support all of this work. The Institute will be provided with the technical resources it needs to sustain its activities, and further initiatives will promote knowledge sharing within the Institute and its dissemination beyond it, encouraging scientific exchange on AI evaluation and safety.

Work already underway

Under the leadership of ANSSI, and in collaboration with the other members of INESIA, the Ministerial Agency for Defense Artificial Intelligence (AMIAD), and the Information Technology Security Evaluation Centers (CESTI), work is underway to design evaluation methods that will guarantee the cybersecurity of AI systems and of the products that incorporate them. In parallel, several structural initiatives led by INESIA members are enriching national expertise.

Viginum and PEReN have jointly conducted work on the detection of synthetic content as part of the AI Action Summit; their results were made public in January 2025. In July 2025, Inria organized the first INESIA Scientific Days at the Inria Center in Paris, promoting exchanges between researchers and experts on the challenges of AI evaluation and safety. Also in July 2025, Inria, LNE, and PEReN published joint testing exercises on advanced AI models, carried out within the international network of AI Safety Institutes. These contributions strengthen France's ability to play an active role in shaping an international evaluation framework.

Find out more: consult the roadmap on the website of the Directorate-General for Enterprise (DGE) (in French only).