Coopetitive AI: Fairness, Privacy, Incentives

Most of the current machine learning literature focuses on a single agent (an algorithm) completing a learning task on gathered data that follows an exogenous distribution, independent of the algorithm. A key assumption is that this data has sufficient “regularity” for classical techniques to work. This classical paradigm of “a single agent learning on nice data”, however, is no longer adequate for many practical and crucial tasks that involve users (who own the gathered data) and/or other (learning) agents simultaneously optimizing their own objectives, in a competitive or conflicting way. This is the case, for instance, in most learning tasks related to Internet applications (matching, content recommendation/ranking, ad auctions, etc.). Moreover, as such learning tasks rely on users’ personal data and as their outcomes affect users in return, it is no longer sufficient to optimize prediction performance metrics alone: it becomes crucial to consider societal and ethical aspects such as fairness and privacy.

The overarching objective of FairPlay is to create algorithms that learn for and with users, together with techniques to analyze them: procedures able to perform classical learning tasks (prediction, decision, explanation) when the data is generated or provided by strategic agents, possibly in the presence of other competing learning agents, while respecting the fairness and privacy of the involved users. To that end, we naturally rely on multi-agent models in which the different agents may either generate or provide data, or learn in a way that interacts with other agents; and we put a special focus on societal and ethical aspects, in particular fairness and privacy.
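As a purely illustrative sketch of one such setting (learning from data provided by privacy-conscious users), the toy example below uses randomized response, a standard local differential privacy mechanism; it is our own illustration and not a method claimed by the FairPlay project. Each user perturbs their own binary attribute before sharing it, and the learner debiases the aggregate to recover an unbiased estimate of the population mean.

```python
import math
import random

def randomized_response(bit, epsilon):
    """Local DP: report the true bit with probability e^eps / (1 + e^eps),
    otherwise flip it. The user's raw data never leaves their device."""
    p = math.exp(epsilon) / (1 + math.exp(epsilon))
    return bit if random.random() < p else 1 - bit

def debiased_mean(reports, epsilon):
    """Unbiased estimator of the true mean from the perturbed reports:
    E[report] = p * mean + (1 - p) * (1 - mean), solved for mean."""
    p = math.exp(epsilon) / (1 + math.exp(epsilon))
    return (sum(reports) / len(reports) - (1 - p)) / (2 * p - 1)

random.seed(0)  # reproducibility for this illustration

# Hypothetical population: 100,000 users, each holding one sensitive bit
# (true underlying rate 0.3 -- an arbitrary value for the example).
true_bits = [1 if random.random() < 0.3 else 0 for _ in range(100_000)]

# Each user privatizes their own bit locally before sharing it.
reports = [randomized_response(b, epsilon=1.0) for b in true_bits]

# The learner only ever sees the noisy reports, yet recovers the mean.
est = debiased_mean(reports, epsilon=1.0)
```

The design point this illustrates is the tension the team studies: a smaller `epsilon` gives users stronger privacy but injects more noise, so the learner needs more data to reach the same accuracy.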

Centre(s) inria
Inria Saclay Centre
In partnership with
Institut Polytechnique de Paris, Criteo


Team leader

Melanie Da Silva

Team assistant