Saving the robot or the human? Robots who feel deserve moral care
Source: Social Cognition, 37(1), 2019, pp. 41-56
Article / Letter to editor
Organizational units: SW OZ BSI CW; SW OZ BSI SCP
Subject: Behaviour Change and Well-being; Communication and Media
Robots are becoming an integral part of society, yet our moral stance toward these non-living objects is unclear. In two experiments, we investigated whether anthropomorphic appearance and anthropomorphic attributions modulated people's utilitarian decision making about robotic agents. In Study 1, participants were presented with moral dilemmas in which the to-be-sacrificed agent was either a human, a human-like robot, or a machine-like robot. These victims were described in either neutral or anthropomorphic priming stories. Study 2 teased apart anthropomorphic attributions of agency and affect. Results indicate that although machine-like robots were sacrificed significantly more often than humans and human-like robots, the effect of humanized priming was the same for all three agent types (Study 1), and this effect was mainly due to the attribution of affective states rather than agency (Study 2). That is, when people attribute affective states to robots, they are less likely to sacrifice them in order to save humans.