AI, building trust and ensuring sovereignty

Updated on 10/11/2021
Applications of artificial intelligence in all aspects of our lives raise new issues. What are the challenges and solutions to building trust in AI and ensuring our sovereignty in this field? Fabien Gandon, Scientific Officer at the Inria centre at Université Côte d’Azur and head of the Wimmics team, provides some insights.
Fabien Gandon © Inria / Photo H. Raguet

What is artificial intelligence in your view?

I would define artificial intelligence (AI) as the automation of processes which we, as humans, see as intelligent: deduction, learning, reading, imagining, speaking, recognition, composition, drafting, cooperation, lying, problem-solving, exploration, and so on. The latest advances in AI have added impressive capabilities to our applications, such as the capacity to predict a phenomenon and the ability to be trained for a specific task.

Everyone has heard about machine learning techniques and the famous artificial neural networks, which have improved dramatically and been hugely successful in recent years, but AI is not limited to this single approach and employs a wide variety of methods. It is an integral part of broader fields of digital science, notably computer science and mathematics, and breakthroughs in these disciplines trigger further progress in AI.

How are citizens and States concerned by the applications of artificial intelligence?

Any entity (company, retail business, public authority, etc.) which produces, collects or stores data can benefit from the progress in AI, which helps to improve, analyse and interpret this data in order to make predictions and decisions. From optimising energy management by predicting consumption, to improving healthcare by learning to adapt treatment to each patient, to analysing satellite images in order to understand, monitor and predict environmental phenomena or agricultural activities, AI applications affect, or will soon affect, virtually every aspect of our lives, which presents opportunities and risks in equal measure. There are two sides to the coin with any technique, and AI is no exception to this rule!

The potential vulnerability of computer systems to attacks on their learning algorithms, attacks that can themselves learn from their failures, has become a major strategic issue. We may also find ourselves in a vulnerable position with regard to the formidable predictive ability of these algorithms, which can become tools for manipulating and exploiting users’ digital behaviour; anything that is predictable can potentially be manipulated, which can lead to losses of freedom.
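To make the first point concrete, here is a minimal sketch, not taken from the interview, of one well-known kind of attack on a learning algorithm: a fast gradient-sign (FGSM-style) perturbation that nudges an input just enough to change a classifier's prediction. The dataset, model and perturbation budget are all illustrative assumptions.

```python
# Minimal sketch of an adversarial (FGSM-style) perturbation against a
# logistic-regression classifier. All data and parameters are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)

x = X[0]                              # one input the model currently classifies
w = clf.coef_[0]                      # model weights
p = clf.predict_proba(x.reshape(1, -1))[0, 1]
grad = (p - y[0]) * w                 # gradient of the log-loss w.r.t. the input

eps = 0.5                             # perturbation budget (illustrative)
x_adv = x + eps * np.sign(grad)       # small step in the loss-increasing direction

print("prediction before:", clf.predict(x.reshape(1, -1))[0])
print("prediction after: ", clf.predict(x_adv.reshape(1, -1))[0])
```

The perturbation is small relative to the data, yet it can be enough to change the model's output, which is precisely what makes such attacks a robustness concern.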

What scientific barriers need to be overcome in order to build trust in artificial intelligence?

I believe the two key challenges at present are developing technically robust, auditable AI methods, and ensuring they serve a documented, ethical purpose. The design of “trustworthy” artificial intelligence methods raises numerous scientific questions, all aimed at strengthening the ability of humans to act in concert with AI. The goal of research in this area is to provide explanations for a conclusion, prediction or suggestion made by an AI system, to enable the auditing of every element of an algorithmic solution in relation to its data, and to integrate mechanisms enabling users to challenge, reverse or correct AI-generated outcomes.
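As one concrete example of such explanations, here is a minimal sketch, under illustrative assumptions rather than from the interview itself, of permutation importance: a simple, model-agnostic way of asking how much a trained model relies on each input feature.

```python
# Minimal sketch of one explanation technique: permutation importance.
# Dataset, model and parameters are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the test accuracy drops:
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranking = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, score in ranking[:5]:
    print(f"{name}: {score:.3f}")
```

Rankings like this do not fully explain a model, but they give users and auditors a first, challengeable account of what a prediction depends on.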

Building trust in AI also requires developing a social, political and moral project alongside the scientific projects we pursue. From the design phase onwards, AI activities must be focused on human beings, their well-being and their rights, and on the characteristics required of these systems: robustness, security, transparency and fairness. In addition to the technical challenges I have mentioned, broader issues also arise, such as laying down an ethical framework, training people to understand the capacities and limitations of systems, regulating uses, and so on.

Achieving technical robustness without neglecting ethical standards often requires us to strike a balance between opposing characteristics, which can be a challenging task! For example, improving the transparency of an algorithm requires access to its source code, which can create security issues; detecting and correcting prediction bias requires access to more data, which may run counter to privacy requirements; and so on.
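The bias example can be made concrete with a minimal sketch, using synthetic data and illustrative names, of one common fairness check: the demographic parity gap, i.e. the difference in positive-prediction rates between two groups.

```python
# Minimal sketch of detecting one kind of prediction bias: the demographic
# parity gap between two groups. All data here is synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)                 # protected attribute (0 or 1)
# Synthetic model decisions, deliberately skewed against group 1:
y_pred = rng.random(1000) < np.where(group == 0, 0.60, 0.45)

rate_0 = y_pred[group == 0].mean()                    # positive rate, group 0
rate_1 = y_pred[group == 1].mean()                    # positive rate, group 1
print(f"positive rate, group 0: {rate_0:.2f}")
print(f"positive rate, group 1: {rate_1:.2f}")
print(f"demographic parity gap: {abs(rate_0 - rate_1):.2f}")
```

Note the tension described above: computing this gap at all requires recording the protected attribute alongside the predictions, which is exactly the kind of extra data collection that privacy requirements restrict.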

What issues does artificial intelligence raise in terms of sovereignty and what are the major technological challenges in this area?

The outcomes of a public policy, an economic investment, a legal ruling or a defence action are likely to be disappointing in the absence of complete control, from the analysis through to the implementation of the decisions that are made. If we lose control over the collection method, the necessary data, the analysis method, the predictive decision-support tools or the implementation infrastructure, we lose the means to exercise our sovereignty.

This global and systematic control, which covers not only artificial intelligence methods but all of their components (data and algorithms, servers, networks and terminals, applications and software, etc.) and their ecosystems (research and industry), may be difficult to achieve in terms of cost, time, critical mass, etc. It therefore seems wiser, in my view, to identify the strategic and priority elements and to start with those. For instance, mastering cryptographic techniques and their integration into other processes (communication, storage, querying, etc.) can help to ensure the strategic isolation on which our sovereignty and security depend.
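As a small illustration of integrating cryptography into a storage process, here is a minimal sketch, with an illustrative file name and payload, using the Fernet symmetric-encryption recipe from Python's `cryptography` package to keep a record encrypted at rest.

```python
# Minimal sketch of encrypting data at rest with the `cryptography` package.
# The file name and record contents are illustrative.
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # in practice, held in a key-management system
f = Fernet(key)

record = b"patient-id=42;treatment=A"  # illustrative sensitive record
token = f.encrypt(record)              # ciphertext, safe to store or transmit

with open("record.enc", "wb") as fh:   # only the ciphertext touches the disk
    fh.write(token)

with open("record.enc", "rb") as fh:   # later: decrypt with the same key
    assert f.decrypt(fh.read()) == record
```

Controlling such building blocks, and where the keys live, is part of mastering the storage and communication processes mentioned above.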

Nevertheless, the need to secure or recover critical functions must not be transformed into a form of systematic digital protectionism. The Internet and the Web are useful precisely because they are global, and in AI, among many other examples, the detection and correction of biases or the generalisation of results can benefit greatly from international exchange and collaboration.

What questions should citizens and businesses be asking about data protection?

We now need to question systematically the collection and use of our data; we know that our data is valuable and that it can be “obtained” (with our consent) or “stolen” (without our knowledge), but it is never simply “given”! Farmers, for example, need to know that their tractors collect data on their activity for the benefit of the manufacturer, and patients need to know that their pharmacy collects their data for private laboratories. The digital world today, and AI more broadly tomorrow, are part of our daily lives; each citizen is a stakeholder in this field and must also be a “watchdog”. On the one hand, this requires the dissemination of a digital culture; on the other, it calls for individual and collective reflection, establishing the limits of what is and is not acceptable, and addressing the question of why we authorise access to our data.

For more information

Discover the new Challenges of AI