When do we accept mistakes from chatbots? The impact of human-like communication on user experience in chatbots that make mistakes
Source
International Journal of Human-Computer Interaction, 40(11), 2024, pp. 2862-2872
Publication type
Article / Letter to editor
Organization
SW OZ BSI CW
Journal title
International Journal of Human-Computer Interaction
Volume
vol. 40
Issue
iss. 11
Languages used
English (eng)
Page start
p. 2862
Page end
p. 2872
Subject
Communication and Media
Abstract
Chatbots are becoming omnipresent in our daily lives. Despite rapid improvements in natural language processing in recent years, the technology behind chatbots is still not completely mature, and chatbots still make many mistakes during their interactions with users. Since technological constraints make it impossible to prevent mistakes entirely, this article investigates whether a human-like communication style can reduce the negative impact of chatbots' mistakes on users. Taking a combination of the Technology Acceptance Model and the concepts of Perceived Enjoyment and Social Presence as a theoretical basis, we conducted an online experiment in which participants interacted with a chatbot and completed a survey afterwards. We found that chatbot mistakes have a negative effect on users' perceptions of Ease of Use, Usefulness, Enjoyment, and Social Presence. Human-like communication was effective in reducing the negative impact of mistakes on Perceived Enjoyment. Theoretical and practical implications are discussed.
This item appears in the following Collection(s)
- Academic publications [243984]
- Electronic publications [130873]
- Faculty of Social Sciences [30023]
- Open Access publications [105042]