A new study found that humans sympathize with AI bots that were excluded from a virtual game of catch, a finding with implications for the design of human-like AI agents. The study was published in Human Behavior and Emerging Technologies.
Intentionally or not, AI bots are being developed to have increasingly human-like traits. But do humans treat these agents as social beings? In the current study, researchers investigated whether people sympathize with human-like virtual AI agents as they do with other people.
To explore this, they recruited 244 participants for an online ball-tossing game. In particular, they investigated how participants responded when they observed an AI virtual agent being either ostracized or fairly treated by another human during the game.
'Fair treatment' and 'ostracism' were modeled by a non-participant human who threw the ball to the bot a fair number of times in some games and excluded it in others, throwing only to the participant. The researchers compared the results from this interaction to those from human-human research on the same game.
Ultimately, they found that participants 'mindlessly' applied the social norm of inclusion to AI bots, tossing the ball to the ostracized agent more frequently, just as they would to an ostracized human. They noted, however, that age influenced this tendency, with younger participants less likely to apply the inclusion norm.
The researchers further noted that while participants displayed increased sympathy for the ostracized AI agent, they did not devalue the human player for their ostracizing behavior. This suggested that participants did not mindfully perceive AI agents as comparable to humans.
"This is a unique insight into how humans interact with AI, with exciting implications for their design and our psychology," said lead author of the study, Jianan Zhou, of the Dyson School of Design Engineering at Imperial College London, in a press release.
"By avoiding designing overly human-like agents, developers could help people distinguish between virtual and real interaction. They could also tailor their design for specific age ranges, for example, by accounting for how our varying human characteristics affect our perception," added Jianan.
The researchers are now designing similar experiments that include face-to-face conversations with agents in different contexts to test whether their findings extend to other settings and ways of interacting.
Sources: Human Behavior and Emerging Technologies, Science Daily