Publication year
2022
Number of pages
8 p.
Source
Bioethics, 36, 2, (2022), pp. 154-161
ISSN
Publication type
Article / Letter to editor

Organization
SW OZ DCC AI
Journal title
Bioethics
Volume
vol. 36
Issue
iss. 2
Languages used
English (eng)
Page start
p. 154
Page end
p. 161
Subject
Cognitive artificial intelligence
Abstract
Trust constitutes a fundamental strategy for dealing with risk and uncertainty in complex societies. In line with the vast literature stressing the importance of trust in doctor-patient relationships, trust is therefore regularly suggested as a way of dealing with the risks of medical artificial intelligence (AI). Yet this approach has come under fire from different angles. At least two lines of thought can be distinguished: (1) that trusting AI is conceptually confused, that is, that we cannot trust AI; and (2) that it is also dangerous, that is, that we should not trust AI, particularly if the stakes are as high as they routinely are in medicine. In this paper, we aim to defend a notion of trust in the context of medical AI against both charges. To do so, we highlight the technically mediated intentions manifest in AI systems, rendering trust a conceptually plausible stance for dealing with them. Based on literature from human-robot interaction, psychology, and sociology, we then propose a novel model for analysing notions of trust, distinguishing between three aspects: reliability, competence, and intentions. We discuss each aspect and make suggestions regarding how medical AI may become worthy of our trust.
This item appears in the following Collection(s)
- Academic publications [204994]
- Faculty of Social Sciences [27347]