Explainable AI using MAP-independence
Cham : Springer
In: Vejnarova, J.; Wilson, N. (eds.), Symbolic and quantitative approaches to reasoning with uncertainty: Proceedings of the 16th European Conference, ECSQARU 2021, pp. 243-254
ECSQARU 2021: 16th European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty (Prague, Czech Republic, September 21-24, 2021)
Article in monograph or in proceedings
SW OZ DCC AI
Subject: Cognitive artificial intelligence
In decision support systems the motivation and justification of the system’s diagnosis or classification is crucial for the acceptance of the system by the human user. In Bayesian networks a diagnosis or classification is typically formalized as the computation of the most probable joint value assignment to the hypothesis variables, given the observed values of the evidence variables (generally known as the MAP problem). While solving the MAP problem gives the most probable explanation of the evidence, the computation is a black box as far as the human user is concerned and it does not give additional insights that allow the user to appreciate and accept the decision. For example, a user might want to know to what extent a variable was relevant for the explanation. In this paper we introduce a new concept, MAP-independence, which tries to formally capture this notion of relevance, and explore its role towards a justification of an inference to the best explanation.
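The MAP computation and the relevance check described above can be sketched on a toy network. The following is an illustrative reading of the idea, not the paper's exact formalization: in a small chain A → B → C with hypothesis variable A, intermediate variable B, and evidence C, the MAP value of A is the one maximizing the joint probability marginalized over B; B is then (informally) MAP-independent if the MAP assignment to A would be the same for every possible value of B. All probabilities below are made up for illustration.

```python
from itertools import product

# Toy Bayesian network A -> B -> C (binary variables).
# CPTs are illustrative, not taken from the paper.
p_a = {1: 0.6, 0: 0.4}
p_b_given_a = {(1, 1): 0.9, (0, 1): 0.1, (1, 0): 0.2, (0, 0): 0.8}  # key: (b, a)
p_c_given_b = {(1, 1): 0.8, (0, 1): 0.2, (1, 0): 0.3, (0, 0): 0.7}  # key: (c, b)

def joint(a, b, c):
    """Chain-rule factorization of the joint distribution."""
    return p_a[a] * p_b_given_a[(b, a)] * p_c_given_b[(c, b)]

def map_hypothesis(evidence_c, fixed_b=None):
    """Most probable value of the hypothesis A given evidence C,
    marginalizing over B (or conditioning on B when fixed_b is set)."""
    scores = {}
    for a in (0, 1):
        bs = (fixed_b,) if fixed_b is not None else (0, 1)
        scores[a] = sum(joint(a, b, evidence_c) for b in bs)
    return max(scores, key=scores.get)

map_a = map_hypothesis(evidence_c=1)

# Informal MAP-independence check: does the MAP assignment to A
# stay the same no matter which value B takes?
map_independent = all(map_hypothesis(1, fixed_b=b) == map_a for b in (0, 1))

print(map_a)            # -> 1: the MAP value of A given C=1
print(map_independent)  # -> False: here B is relevant to the explanation
```

In this toy example B is not MAP-independent: fixing B=0 flips the most probable hypothesis, so a user could be told that B genuinely mattered for the diagnosis. Brute-force enumeration is used only for clarity; the MAP problem itself is computationally hard in general.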