Security risks of social robots used to persuade and manipulate: A proof of concept study
New York, NY : Association for Computing Machinery
In HRI '20: Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, pp. 523-525
HRI '20: ACM/IEEE International Conference on Human-Robot Interaction (Cambridge, UK, March 23-26, 2020)
Article in monograph or in proceedings
SW OZ DCC AI
Subject: Cognitive artificial intelligence
Earlier research has shown that robots can provoke social responses in people, and that robots often elicit compliance. In this paper we discuss three proof-of-concept studies in which we explore the possibility of robots being hacked and taken over by others with the explicit purpose of exploiting the robot's social capabilities. Three scenarios are explored: gaining access to secured areas, extracting sensitive and personal information, and convincing people to take unsafe actions. We find that people are willing to comply with the robot's requests in these scenarios, and that social robots tend to be trusted, even in situations that would normally cause suspicion.