Making AI secure on phones and connected objects

Updated on 10/03/2023
We are beginning to see artificial intelligence algorithms being deployed on embedded systems, and not just in the cloud. The problem is that they end up stored on devices over which their publishers have no control. How can you make sure they aren't hacked or stolen? The Inria Startup Studio project Skyld is designing a development kit which companies will be able to use to encrypt their software and to obfuscate it during execution, keeping it safe from harm.
A hand holding a smartphone with social media apps
© Freepik


With costs in some cases exceeding two million euros, developing an artificial intelligence system and training it on data is a costly process. Algorithms are intellectual property assets of key strategic importance and can confer a competitive advantage. As such, they need to be protected against anyone who might seek to get their hands on them or to reverse engineer them at little cost. This is particularly true now that AI is light enough to be embedded in phones and other connected objects.

After studying for a PhD in Cryptography and spending time with a number of major companies, Marie Paindavoine is now in charge of the Inria Startup Studio business project Skyld at the Rennes research centre. The aim of this project is to design a development kit that companies will be able to use to protect their AI using encryption and obfuscation.

Protecting source code

The fact that algorithms are now found on the hard drives of phones and connected objects makes this all the more pressing.

Portrait of Marie Paindavoine

Much more than with the cloud, this environment is at the mercy of attacks aimed at gaining access to the AI architecture and all of the weights associated with it.


Marie Paindavoine


Leader of the Skyld business project

The solution they are working on will seek to encrypt algorithms. “If an attacker gets their hands on an algorithm, all they will get is an unintelligible sequence of zeros and ones, meaning no reverse engineering will be possible. We also protect the source code during execution. With the exception of homomorphic encryption, systems cannot perform calculations on encrypted data: at execution time, the data has to be decrypted. We use software obfuscation to ensure that algorithms cannot be extracted, even during execution.”
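The general principle of the first step, encrypting a model so that a stolen file is unintelligible, can be sketched as follows. This is an illustrative toy, not Skyld's product: the keystream here is built from SHA-256 purely for demonstration, and a real system would use an authenticated cipher such as AES-GCM from a vetted cryptographic library.

```python
import hashlib
import os
import struct

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode. For illustration only;
    # real deployments should use AES-GCM or ChaCha20-Poly1305.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + struct.pack(">Q", counter)).digest()
        counter += 1
    return out[:length]

def encrypt_weights(weights: list, key: bytes, nonce: bytes) -> bytes:
    # Serialise the model weights, then XOR with the keystream:
    # the resulting blob is an unintelligible string of bits.
    raw = struct.pack(f">{len(weights)}d", *weights)
    ks = keystream(key, nonce, len(raw))
    return bytes(a ^ b for a, b in zip(raw, ks))

def decrypt_weights(blob: bytes, key: bytes, nonce: bytes) -> list:
    # Only the key holder can recover the weights at execution time.
    ks = keystream(key, nonce, len(blob))
    raw = bytes(a ^ b for a, b in zip(blob, ks))
    return list(struct.unpack(f">{len(blob) // 8}d", raw))

key, nonce = os.urandom(32), os.urandom(16)
weights = [0.5, -1.25, 3.0]
blob = encrypt_weights(weights, key, nonce)
assert decrypt_weights(blob, key, nonce) == weights
```

Without the key, the blob carries no recognisable architecture or weight values, which is what blocks the cheap reverse engineering described above.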

And that’s not all. “AI algorithms use up a lot of resources. It is becoming increasingly common for manufacturers to give them dedicated hardware accelerators. Next-generation mobiles have ‘neural processing units’ (NPUs), chips that have been optimised for neural networks. It’s hard enough getting AI to work on a connected object without the security solution denying the algorithm these accelerators. Our masking will respect the computational structure and enable the software to run on these specialist chips.”
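One well-known way to mask a computation without breaking its structure is to blind a layer's weights with a secret random factor: the masked matrix keeps the same shape, so a dense accelerator kernel runs on it unchanged, and the output is unblinded afterwards. This is a sketch of that general idea only, not of Skyld's actual (unpublished) technique.

```python
def masked_linear(weights, x, r):
    # Blind the weights with the secret factor r. The masked matrix
    # has the same shape as the original, so an optimised dense
    # kernel (e.g. on an NPU) can run on it unchanged.
    masked = [[w * r for w in row] for row in weights]
    y_masked = [sum(w * xj for w, xj in zip(row, x)) for row in masked]
    # Unblind the output. An attacker dumping `masked` from memory
    # never sees the real weights.
    return [y / r for y in y_masked]

# The masked computation matches the plain linear layer:
W = [[1.0, 2.0], [3.0, 4.0]]
x = [0.5, -1.0]
plain = [sum(w * xj for w, xj in zip(row, x)) for row in W]
assert all(abs(a - b) < 1e-9 for a, b in zip(masked_linear(W, x, 7.5), plain))
```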

Searching for an industrial partner for a proof of concept

Two engineers are currently working with Marie Paindavoine on the project. “The first is being financed by Startup Studio, while the second is being financed by the Inria researcher Teddy Furon’s chair in Artificial Intelligence in the defence sector. We are also in contact with Mohamed Sabt, a scientist who specialises in security.” Development is ongoing, aimed at bringing a technological demonstrator to maturity. “We are currently looking for a manufacturer to develop a proof of concept with us, potentially in the health sector, where a lot of the innovative algorithms that are developed are of strategic importance. This attracts attackers, which makes the need for protection all the more important.”


Algorithm theft can also be the first step towards more complex attacks, including the injection of adversarial examples. “These are inputs (text, sound or images) which are used to deliberately trick algorithms into making a wrong decision. If I were to put a sticker on a STOP sign, the human eye wouldn't pay it any attention, but an AI system could be tricked into reading the sign as a speed limit.”
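The mechanics behind such attacks can be shown on a toy model. The sketch below uses the classic Fast Gradient Sign Method (FGSM) on a two-feature logistic classifier with made-up weights; with access to the model's parameters (exactly what theft provides), the attacker computes the loss gradient and nudges each input feature in the direction that increases the loss, flipping the decision.

```python
import math

# Toy logistic classifier p(y=1|x) = sigmoid(w . x + b).
# Weights and inputs are illustrative values, not from Skyld.
w = [2.0, -1.0]
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(x, y, eps):
    # Fast Gradient Sign Method: the gradient of the cross-entropy
    # loss w.r.t. the input is (p - y) * w; step by eps in the sign
    # of that gradient to push the model towards a wrong answer.
    p = predict(x)
    grad = [(p - y) * wi for wi in w]
    return [xi + eps * math.copysign(1.0, g) for xi, g in zip(x, grad)]

x = [1.0, 0.5]           # clean input, confidently classified as 1
x_adv = fgsm(x, 1, 0.8)  # small perturbation crafted against label 1
# predict(x) > 0.5 while predict(x_adv) < 0.5: the decision flips.
```

This is why access to a stolen model's architecture and weights makes adversarial attacks so much more dangerous: the gradient computation above requires knowing `w`.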

With this in mind, Skyld is also working on something else alongside its work on encryption. “There are huge challenges for these AI algorithms in terms of explainability and auditability. Imagine if I were to ask a standard software program to make a chocolate cake. I would say to it: here’s some butter, some sugar, eggs, flour and chocolate. Beat the egg whites until stiff, mix the ingredients like this, put it in the oven at such-and-such a temperature and make me a cake. This is an explainable process; you can understand what the program is doing. AI algorithms don't work like that. You give them examples and you say to them: learn how to make a chocolate cake. Work it out for yourself! The algorithm will go through all of the possibilities, trying various different directions, and exploring much more widely than is necessary. Eventually it will arrive at a chocolate cake.”

So, what’s the problem? “The problem is that it will have explored so many possibilities and produced something so enormous that it won't be possible to audit it. The decision-making process can't be understood. Three eggs or four? Who knows. And it’s those wrong steps towards the final result that make the algorithm vulnerable. In these areas attackers will be able to craft perturbations with the capacity to trick it. Access to the model’s source code and parameters will make these adversarial attacks more dangerous.” But not necessarily easier to detect, hence the need to build trust. In the long term, by drawing on the vast amount of research carried out in this field, the Skyld project is also seeking to devise an auditability system for AI: “a sort of formal verification of algorithms to make sure they are capable of withstanding certain types of attacks.”


Skyld project: cybersecurity for artificial intelligence algorithms (in French)

Marie Paindavoine's podcast (audio file)