Should we “forgive” machines when they perform poorly? That is the question a recent study published in Frontiers in Computer Science set out to address, as a team of researchers investigated how human operators should respond when machines make mistakes. The study has the potential to help scientists, engineers, and everyday users improve human-machine collaboration as artificial intelligence (AI) continues to advance worldwide.
For the study, the researchers conducted a literature review, examined steps that can be taken to extend a certain level of “forgiveness” within Human-Machine Interaction (HMI), also called Human-Computer Interaction (HCI), and discussed how forgiveness should be defined in the context of the HMI relationship. Based on focus group findings, they concluded that forgiving machine “behavior” follows a process that involves accepting human responsibility, recognizing that technology is inherently flawed, and weighing the pros and cons of continuing to use machines despite their errors.
"Our relationship with machines is no longer one-dimensional or purely technical,” said Dr. Galit Nimrod, who is a professor at Ben-Gurion University of the Negev and the study’s co-author. “We treat them like companions: we get disappointed, angry, but also forgive. In many ways, our phones, apps, and devices have become part of our social and emotional circles."
This study comes as AI continues to make inroads into our daily lives, from smartphones to laptops, and across a myriad of industries, including academia, healthcare, research, and manufacturing. This breadth is part of what makes HMI such an interdisciplinary field, encompassing physical interfaces, digital interfaces, and natural user interfaces. Studies like this one can therefore show how humans might practice patience when working with AI, which could strengthen the HMI relationship going forward.
How will machine “forgiveness” advance the HMI relationship in the coming years and decades? Only time will tell, and this is why we science!
As always, keep doing science & keep looking up!
Sources: Frontiers in Computer Science, EurekAlert!