Artificial intelligence

The Antidote project, or explainable AI

Changed on 08/09/2022
The topic of Explainable AI is currently attracting a lot of interest in the scientific community, due to the massive use of machine learning and deep learning algorithms in a wide range of applications. Nevertheless, generating explanations that humans can understand remains an open challenge. Interview with Elena Cabrio and Serena Villata, co-coordinators of the Antidote project.

A European project

This issue is particularly important in the context of AI applications for medical diagnosis, where it is essential to know how the algorithm reached its decision.

For this reason, we decided to respond to the CHIST-ERA call with a project that highlights our expertise in natural language argumentation, in collaboration with physicians, building on our results in the automatic analysis of argumentation in PubMed clinical trial abstracts with our ACTA tool.

The Antidote project is coordinated by the Wimmics project-team, mainly in partnership with universities.

Antidote stands for ArgumeNtaTIon-Driven explainable artificial intelligence fOr digiTal mEdicine.

Shared coordination and experience

In view of our ongoing collaboration on different topics in AI and natural language processing, the choice to co-coordinate this project came naturally.

This allows us to jointly address the issues and challenges that the coordination of a project of this magnitude can raise.

Elena and Serena were able to build on Elena's experience, as this is the second CHIST-ERA project she has participated in. From 2015 to 2018, she was the PI for the Wimmics project-team on the ALOOF project (Autonomous Learning of the Meaning of Objects), coordinated by La Sapienza University in Italy.

AI in the Antidote project

Providing high-quality explanations for AI predictions based on machine learning is a difficult and complex task.

To be effective, it requires, among other things:

  • choosing an appropriate level of generality/specificity for the explanation;
  • referring to the specific elements that contributed to the algorithm's decision;
  • using additional knowledge that can help explain the prediction process and selecting appropriate examples.

The goal is for the system to be able to formulate the explanation in a clearly interpretable, even convincing, manner.

Given these considerations, the Antidote project promotes an integrated view of explainable AI (XAI), where low-level features of the deep learning process are combined with higher-level patterns of human argumentation.

The Antidote project is based on three considerations:

  1. In neural architectures, the correlation between the internal states of the network (e.g., the weights of individual nodes) and the justification of the classification outcome produced by the network is not well studied;
  2. High-quality explanations are crucial and should be based primarily on argumentation mechanisms (a minimal sketch of such a mechanism follows this list);
  3. In real-life situations, explanation is by nature an interactive process involving an exchange between the system and the user.
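
To give a concrete, self-contained illustration of the argumentation mechanisms mentioned in point 2, here is a minimal Python sketch. It is purely illustrative and not part of the Antidote or ACTA codebases; the argument names and the attack relation are invented. It computes the grounded extension of a small abstract argumentation framework, i.e., the set of arguments that survive all attacks and could therefore back an explanation.

    # Minimal illustrative sketch (not the Antidote implementation):
    # compute the grounded extension of an abstract argumentation framework,
    # i.e. the arguments whose attackers can all be defeated.

    def grounded_extension(arguments, attacks):
        """arguments: set of labels; attacks: set of (attacker, target) pairs."""
        accepted, rejected = set(), set()
        changed = True
        while changed:
            changed = False
            for arg in arguments - accepted - rejected:
                attackers = {a for (a, t) in attacks if t == arg}
                if attackers <= rejected:      # all attackers defeated -> accept
                    accepted.add(arg)
                    changed = True
                elif attackers & accepted:     # attacked by an accepted argument -> reject
                    rejected.add(arg)
                    changed = True
        return accepted

    # Toy, hypothetical clinical arguments:
    # E1: "trial evidence supports treatment T"
    # C1: "the patient has a contraindication to T"   (attacks E1)
    # R1: "the contraindication has been ruled out"   (attacks C1)
    arguments = {"E1", "C1", "R1"}
    attacks = {("C1", "E1"), ("R1", "C1")}
    print(grounded_extension(arguments, attacks))  # E1 and R1 accepted, C1 rejected

In such a setting, an explanation can point to the accepted arguments (here E1 and R1) and to the counterarguments they defeat, rather than to raw network weights.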

A synergy with humans

Antidote will develop an argumentation-focused explainable AI with revolutionary "integration skills" that can work in synergy with humans, explaining its results in a way that humans can trust, while taking advantage of the ability of AI systems to learn from data.

Antidote will engage users in explanatory dialogues, allowing them to argue with the AI in natural language. The project's main application area is medical education, training students to provide clear explanations that justify their diagnoses.

Want to know more about the Antidote project?