Security risks of social robots used to persuade and manipulate: A proof of concept study
Publication year
2020
Publisher
New York, NY : Association for Computing Machinery
ISBN
9781450370578
In
HRI '20: Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, pp. 523-525
Annotation
HRI '20: ACM/IEEE International Conference on Human-Robot Interaction (Cambridge, UK, March 23-26, 2020)
Publication type
Article in monograph or in proceedings
Organization
SW OZ DCC AI
Languages used
English (eng)
Book title
HRI '20: Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction
Page start
p. 523
Page end
p. 525
Subject
Cognitive artificial intelligence
Abstract
Earlier research has shown that robots can provoke social responses in people, and that robots often elicit compliance. In this paper we discuss three proof-of-concept studies in which we explore the possibility of robots being hacked and taken over by others with the explicit purpose of exploiting the robot's social capabilities. Three scenarios are explored: gaining access to secured areas, extracting sensitive and personal information, and convincing people to take unsafe actions. We find that people are willing to comply in these scenarios, and that social robots tend to be trusted even in situations that would normally arouse suspicion.
This item appears in the following Collection(s)
- Academic publications [244127]
- Electronic publications [131120]
- Faculty of Social Sciences [30028]
- Open Access publications [105157]