Arg-XAI: a Tool for Explaining Machine Learning Results
Bistarelli S.; Mancinelli A.; Santini F.; Taticchi C.
2022
Abstract
The requirement of explainability is gaining increasing importance in Artificial Intelligence applications based on Machine Learning techniques, especially in contexts where critical decisions are entrusted to software systems (consider, for example, financial and medical consultancy). In this paper, we propose an Argumentation-based methodology for explaining the results predicted by Machine Learning models. Argumentation provides frameworks that can be used to represent and analyse logical relations between pieces of information, serving as a basis for constructing human-tailored rational explanations for a given problem. In particular, we use extension-based semantics to find the rationale behind a class prediction.
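
To make the idea of extension-based semantics concrete, the following is a minimal illustrative sketch, not the Arg-XAI tool itself: under Dung's grounded semantics, an argument is accepted when all of its attackers are ultimately defeated, so the accepted set can serve as the rationale for a prediction. The argument names and attack relation below are hypothetical examples.

    # Illustrative sketch (not the Arg-XAI implementation): the grounded
    # extension of an abstract argumentation framework (Dung, 1995).

    def grounded_extension(arguments, attacks):
        """Iteratively accept arguments whose attackers are all defeated."""
        accepted, defeated = set(), set()
        changed = True
        while changed:
            changed = False
            for a in arguments:
                if a in accepted or a in defeated:
                    continue
                attackers = {x for (x, y) in attacks if y == a}
                # Accept `a` once every one of its attackers is defeated
                # (unattacked arguments are accepted immediately).
                if attackers <= defeated:
                    accepted.add(a)
                    # Everything attacked by an accepted argument is defeated.
                    defeated |= {y for (x, y) in attacks if x == a}
                    changed = True
        return accepted

    # Hypothetical example: a class prediction `pred` with a supporting
    # argument, a counterargument, and a rebuttal of that counterargument.
    arguments = {"pred", "f1_supports", "f2_against", "f3_rebuts_f2"}
    attacks = {("f2_against", "pred"), ("f3_rebuts_f2", "f2_against")}
    print(grounded_extension(arguments, attacks))
    # -> the prediction and its defenders; the rebutted counterargument is excluded

In this toy case, the grounded extension contains `pred`, `f1_supports`, and `f3_rebuts_f2`: the prediction is accepted because its only attacker is itself defeated, which is the kind of rationale the methodology extracts.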