Publication year
2018
Publisher
Cham : Springer International Publishing
Series
Springer Series on Challenges in Machine Learning
ISBN
9783319981307
In
Escalante, H.J.; Escalera, S.; Guyon, I. (ed.), Explainable and interpretable models in computer vision and machine learning, pp. 19-36
Publication type
Part of book or chapter of book
Editor(s)
Escalante, H.J.
Escalera, S.
Guyon, I.
Baró, X.
Güçlütürk, Y.
Güçlü, U.
Gerven, M. van
Organization
SW OZ DCC AI
Languages used
English (eng)
Book title
Escalante, H.J.; Escalera, S.; Guyon, I. (ed.), Explainable and interpretable models in computer vision and machine learning
Page start
p. 19
Page end
p. 36
Subject
Springer Series on Challenges in Machine Learning; Cognitive artificial intelligence; DI-BCB_DCC_Theme 2: Perception, Action and Control; DI-BCB_DCC_Theme 4: Brain Networks and Neuronal Communication
Abstract
Issues regarding explainable AI involve four components: users, laws and regulations, explanations, and algorithms. Together these components provide a context in which explanation methods can be evaluated for their adequacy. The goal of this chapter is to bridge the gap between expert users and lay users. Different kinds of users are identified and their concerns revealed, relevant statements from the General Data Protection Regulation are analyzed in the context of Deep Neural Networks (DNNs), a taxonomy for the classification of existing explanation methods is introduced, and finally, the various classes of explanation methods are analyzed to verify whether user concerns are justified. Overall, it is clear that (visual) explanations can be given about various aspects of the influence of the input on the output. However, it is noted that explanation methods or interfaces for lay users are missing, and we speculate on the criteria these methods and interfaces should satisfy. Finally, it is noted that two important concerns are difficult to address with explanation methods: bias in datasets that leads to biased DNNs, and the suspicion of unfair outcomes.
This item appears in the following Collection(s)
- Academic publications
- Faculty of Social Sciences