A European project
This issue is particularly important in the context of AI applications for medical diagnosis, where it is essential to understand how the algorithm reached its decision.
For this reason, we decided to respond to the CHIST-ERA call with a project that highlights our expertise in natural-language argumentation, in collaboration with physicians, building on our results in the automatic analysis of argumentation in clinical trial abstracts from PubMed with our ACTA tool.
The Antidote project is coordinated by the Wimmics project-team, in partnership with the following academic and research institutions:
- Université Côte d'Azur
- Antoine Lacassagne Center
- Fondazione Bruno Kessler, Trento, Italy
- University of the Basque Country, Spain
- KU Leuven, Belgium
- Universidade Nova de Lisboa, Portugal
Shared coordination and experience
In view of our ongoing collaboration on different topics in AI and natural language processing, the choice to co-coordinate this project came naturally.
This allows us to jointly address the issues and challenges that the coordination of a project of this magnitude can raise.
AI in the Antidote project
Providing high-quality explanations for AI predictions based on machine learning is a difficult and complex task.
To be effective, it requires, among other things:
- choosing an appropriate level of generality/specificity for the explanation;
- referring to the specific elements that contributed to the algorithm's decision;
- using additional knowledge that can help explain the prediction process and select appropriate examples.
The goal is for the system to be able to formulate the explanation in a clearly interpretable, even convincing, manner.
Given these considerations, the Antidote project promotes an integrated view of explainable AI (XAI), where low-level features of the deep learning process are combined with higher-level patterns of human argumentation.
The Antidote project is based on three considerations:
- In neural architectures, the correlation between the internal states of the network (e.g., the weights of individual nodes) and the rationale for the classification result made by the network is not well studied;
- High quality explanations are crucial and should be based primarily on argumentation mechanisms;
- In real-life situations, explanation is by nature an interactive process involving an exchange between the system and the user.
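To make the first consideration concrete, here is a minimal toy sketch (not the project's actual method, and all values are hypothetical): for a linear classifier, each per-feature contribution w_i * x_i decomposes the decision score exactly, giving the kind of low-level rationale that is much harder to recover from the internal states of a deep network.

```python
import numpy as np

# Toy linear classifier: per-feature contributions w_i * x_i sum exactly
# to the decision score, so the "rationale" is directly readable.
weights = np.array([1.5, -2.0, 0.5])   # learned weights (hypothetical)
features = np.array([0.8, 0.3, 1.0])   # one input example (hypothetical)

contributions = weights * features     # contribution of each feature to the score
score = contributions.sum()
prediction = int(score > 0)

# Rank features by absolute contribution to sketch a simple "explanation"
order = np.argsort(-np.abs(contributions))
for i in order:
    print(f"feature {i}: contribution {contributions[i]:+.2f}")
print(f"score = {score:.2f}, prediction = {prediction}")
```

In a deep network there is no such exact decomposition, which is precisely why the project argues for coupling these low-level signals with higher-level argumentation patterns.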
A synergy with humans
Antidote will develop an Explainable AI focused on argumentation with revolutionary "integration skills" that can work synergistically with humans, explaining its results in a way that humans can trust, while taking advantage of the ability of AIs to learn from data.
Antidote will engage users in explanatory dialogues, allowing them to argue with the AI in natural language. In its application area, the Antidote project aims mainly to impact medical education, training students to provide clear explanations that justify their diagnoses.