
Data Transparency

7/02/2017

TransAlgo: assessing the accountability and transparency of algorithmic systems


When I search for an itinerary on my smartphone via my favourite application, how do I know that the algorithm is not resorting to commercial criteria in order to route me past commercial points of interest? The aim of the TransAlgo project, which has just been awarded to Inria by Axelle Lemaire in the context of the French Law for a Digital Republic, is to shed light on these types of practices when they are not made explicit. How can we develop methods to verify whether a decision is based on unacceptable criteria? Nozha Boujemaa, who has been tasked with this major work, responds.

How did the TransAlgo project come about?

Following the Law for a Digital Republic, Axelle Lemaire commissioned a report from the French General Council for the Economy (CGE) on methods for regulating content-processing algorithms. One of the recommendations of this report was the creation of a collaborative scientific platform aimed at, on the one hand, furthering the development of software tools and methods for testing algorithms and, on the other hand, promoting their use. With an internal working group [1], we proposed setting up a platform called TransAlgo, devoted to developing the transparency and accountability of algorithmic systems, given the duality between data and algorithms.

Inria was given the role of TransAlgo operator, with the backing of the French National Digital Council (CNNum) and the higher education institute Institut Mines-Télécom (IMT), and will act as a catalyst for the scientific effort alongside other academic partners, in particular the French National Centre for Scientific Research (CNRS). In addition to its scientific expertise, Inria will provide support for software development. This platform will be a first in Europe.

Why be concerned with this subject?

A simple example: if a smartphone application uses my GPS data to suggest an itinerary, who verifies that the algorithm is not resorting to commercial criteria in order to route me past commercial points of interest? After all, that is what Google does on its search engine with the purchase of sponsored links. Another example of unfair behaviour is volatile pricing, where the price of your plane ticket increases with each new visit to an e-commerce website. The aim is not to curb innovation or new business models, but to support innovation through informed education of the consumer, be they an individual (B2C) or a company (B2B), and through the traceability of automated decision-making. Transparency is an asset for the "empowerment" of the consumer, but also a factor in economic competitiveness.
Indeed, a recent study by Inria and the French Data Protection Authority (CNIL) caught out a well-known economic player. The issue concerned its mobile application, which was overriding users' consent by transmitting their GPS position even though they had refused. In fact, the company's management was not aware of this and had to launch an internal inquiry to understand the origin of the problem. A program's lack of loyalty is not necessarily intentional!

There are also ranking mechanisms in search engines, and mechanisms for recommending and selecting content, that do not currently operate in a transparent manner... All of this can have consequences that most people still underestimate for the granting of bank loans, insurance, recruitment decisions, and so on.

These are real challenges, therefore, concerning information, neutrality, loyalty, fairness, non-discrimination, unfair competition, respect of consent and privacy, and so on. Nonetheless, one thing that is very important to understand is that the TransAlgo scientific platform will in no way be responsible for the regulatory control of algorithms or of the use of data. It will propose studies, tools and services to all of the actors concerned.

Inria was given the role of TransAlgo operator and will act as a catalyst for the scientific effort.

What are the scientific issues of TransAlgo?

The transparency of algorithmic systems is a real challenge for academic research. It calls for skills from several disciplines, and many of the topics identified have not yet been sufficiently explored by academic research - hence the importance of redoubling the research effort. "Accountable-by-design" algorithms need to be developed, which facilitate the measurement of their transparency, the explanation of their results and the traceability of their reasoning. An algorithm is deemed "accountable" if it respects the law and complies with certain ethical rules.

An algorithm is transparent if its "accountability" can be easily verified, for example if it opens up its code, if it makes explicit both the origin of the data it has used and the data it produces, if it explains its results, or if it publishes traces of its calculations. It should be mentioned that we will also consider situations where the code is not open since there is no obligation to divulge the code.
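To make these notions more concrete, here is a minimal, hypothetical sketch in Python (in no way a TransAlgo tool) of what a traceable itinerary-ranking function might look like: it declares its criteria and data provenance and publishes a trace of its calculation, so that an auditor can check that no undeclared commercial criterion influenced the result. All field names, criteria and weights are illustrative assumptions.

```python
# Illustrative sketch only: a route-ranking function that records an audit
# trace (declared criteria, their weights, data provenance and per-route
# scores) alongside its result. Every name and weight is a hypothetical
# assumption, not an actual TransAlgo component.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Route:
    name: str
    duration_min: float        # travel time, from the routing engine
    sponsored_poi_count: int   # commercial points of interest along the way

@dataclass
class DecisionTrace:
    inputs_origin: str                                        # provenance of the input data
    criteria: Dict[str, float] = field(default_factory=dict)  # declared criterion -> weight
    scores: Dict[str, float] = field(default_factory=dict)    # route -> computed score

def rank_routes(routes: List[Route]) -> Tuple[Route, DecisionTrace]:
    # Only travel time is declared as a criterion; an auditor reading the
    # trace can verify that 'sponsored_poi_count' plays no role in the score.
    trace = DecisionTrace(inputs_origin="user GPS position + public map data",
                          criteria={"duration_min": -1.0})
    for route in routes:
        trace.scores[route.name] = trace.criteria["duration_min"] * route.duration_min
    best = max(routes, key=lambda r: trace.scores[r.name])
    return best, trace

if __name__ == "__main__":
    candidates = [Route("via city centre", duration_min=25, sponsored_poi_count=4),
                  Route("ring road", duration_min=18, sponsored_poi_count=0)]
    chosen, trace = rank_routes(candidates)
    print(chosen.name)      # -> ring road (fastest, despite having no sponsored POIs)
    print(trace.criteria)   # -> {'duration_min': -1.0}: the only declared criterion
```

Publishing such a trace does not require opening the source code: the route chosen, the criteria used and their weights can be disclosed on their own.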

How are you going to proceed?

In order to get to work on this, it is first necessary to define what we mean by transparent, neutral, loyal or fair software - notions that are largely legal in nature. This work involves verifying the compliance of a system's behaviour with its specification, in other words the difference between what it is supposed to do and what it actually does. It will also shed light on its compliance with ethical and legal rules. The methods and technical tools for the transparency of algorithmic systems form a complex and multi-faceted topic. The properties that we wish to verify, for example non-discrimination or loyalty, involve a significant element of subjectivity that depends on uses and contexts, which makes them difficult to specify. The scientific challenges are numerous, and very little research exists on this subject.
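As an illustration of checking behaviour against a specification without seeing the code, here is a small black-box probe, assuming a hypothetical quote_price function whose specification states that the fare depends only on the route and the travel date, not on the customer's browsing history (the volatile-pricing case mentioned above). The function and its parameters are assumptions made for the sake of the example.

```python
# Minimal black-box compliance probe: the specification under test says the
# fare depends only on route and travel date. We send identical requests that
# differ only in the number of prior visits and flag any divergence.
# 'quote_price' is a hypothetical stand-in for the real system under test.

def quote_price(route: str, date: str, prior_visits: int) -> float:
    # Stand-in implementation; a compliant pricer ignores 'prior_visits'.
    base_fares = {"PAR-NYC": 420.0, "PAR-TYO": 780.0}
    return base_fares[route]

def visits_leave_price_unchanged(pricing_fn, route: str, date: str,
                                 max_visits: int = 10) -> bool:
    """Return True if the quoted fare is identical whatever the visit count."""
    reference = pricing_fn(route, date, prior_visits=0)
    return all(pricing_fn(route, date, prior_visits=v) == reference
               for v in range(1, max_visits + 1))

if __name__ == "__main__":
    # True for the compliant stand-in; a loyalty violation would return False.
    print(visits_leave_price_unchanged(quote_price, "PAR-NYC", "2017-07-01"))
```

The hard part, as noted above, is agreeing on the specification itself; once a property such as "the fare does not depend on browsing history" is written down, probing for violations is comparatively straightforward.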

Firstly, between now and mid-2017, we are going to establish the scientific platform in its "resource centre" phase. It will compile scientific studies, white papers and other useful resources, together with a shared conceptualisation of the notions mentioned above and a discussion area for the scientific community. We hope that it will be interoperable with foreign initiatives, in particular American ones. Then we will launch research programmes in the true sense of the term, through doctoral theses and postdoctoral research. This will enable the consolidation of a scientific community and a corpus of shared tools around these themes. One of our partners is the Data Transparency Lab, supported in particular by MIT and the Mozilla Foundation, and Inria is a member of its board. Finally, the third and last phase will be the dissemination of good practices through, for example, online courses (MOOCs) aimed at public authorities, industry and the general public.

How can you be certain of having a foothold in the real world?

The CNNum will join forces with Inria in TransAlgo, in line with their respective missions. It will take charge of cataloguing and objectively documenting the current practices of the platforms through contributory measurements involving citizens and professionals. Data from the various European and international regulatory sources will also enrich the resource centre.

In order to bring real uses to the fore, we also plan to collaborate with think tanks such as the FING (Next Generation Internet Foundation) and consumer groups such as Que Choisir, in addition to CERNA (Allistene's French advisory commission for ethics in ICT research).
We will also work from expressions of requirements coming from the CNNum, the French Directorate General for Competition Policy, Consumer Affairs and Fraud Control (DGCCRF) and the French Data Protection Authority (CNIL), which will be in a position to relay the issues most commonly observed by citizens, industry and regulatory authorities.

[1] Daniel Le Métayer, Serge Abiteboul, Claude Castelluccia

Keywords: TransAlgo, Algorithm, Security, Data Transparency, CNIL, CERNA
