Exploring Human-Robot Relationships: Forgiveness and Accountability
Imagine a scenario where a driver, due to a grave error, causes the death of someone dear to you. The driver would face legal consequences, and you would grapple with the question of forgiveness. Now, envision this situation involving a self-driving car instead. Should it be held accountable? Can we even consider forgiveness or punishment for an automated machine?
While it may seem instinctive to dismiss such notions for robots, the rise of service robots in our daily lives raises pressing questions. The global market for service robotics was valued at $11 billion in 2018 and is anticipated to soar to $50 billion by 2024. This is a discourse humanity must engage in.
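Taken at face value, those two figures imply a steep growth rate. As a rough back-of-the-envelope illustration (the smooth compound-growth assumption between the two reported values is our simplification, not the report's), the implied compound annual growth rate works out to roughly 29%:

```python
# Rough check of the growth implied by the article's figures:
# $11B in 2018 growing to $50B by 2024, assuming smooth compound growth.

start_value = 11e9   # service robotics market in 2018, USD
end_value = 50e9     # projected market in 2024, USD
years = 2024 - 2018  # 6-year span

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied compound annual growth rate: {cagr:.1%}")  # ~28.7%
```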
Accountability
Is a robot that commits a crime simply a faulty machine, or can it be viewed as a criminal? To address this, we must first ascertain whether robots can be held accountable for their actions.
Attributing responsibility to non-human entities is not unprecedented: corporate law allows companies themselves to be prosecuted. By analogy, whether a robot can be designated a criminal might hinge on its level of autonomy.
A 2017 report by the European Parliament's Committee on Legal Affairs suggested that robots with a sufficient degree of autonomy could be held liable for their actions. The report differentiates between a mere tool, which acts on the commands of a human, and an autonomous entity that uses AI and machine learning to make independent decisions.
For the latter, the report posited that the robot bears greater responsibility for its actions than its creator does, making it accountable for any harm caused.
The concept of a liable robot challenges our traditional views, as we often attribute blame to either the manufacturer or the user. However, with advancing technology, this perspective is becoming increasingly outdated.
A 2019 study titled "Robot Criminals," published in The University of Michigan Journal of Law Reform, explored the possibility of a scenario where no human could be deemed sufficiently at fault for a robot's morally reprehensible action.
The study distinguished between ordinary robots and "smart robots," which are defined by three criteria:
1. Equipped with algorithms that can make morally significant decisions.
2. Capable of conveying their moral choices to humans.
3. Authorized to act autonomously without immediate human oversight.
Essentially, smart robots can make decisions, act upon them, and provide explanations afterward. While this may sound like science fiction, the advancement of AI makes such scenarios plausible.
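To make the classification concrete, the three criteria can be read as a simple conjunctive test: a robot counts as "smart" only when all three hold. The sketch below is our own illustration of that reading; the attribute names are hypothetical and do not come from the study.

```python
from dataclasses import dataclass

# Illustrative only: attribute names are hypothetical, not from "Robot Criminals".
@dataclass
class Robot:
    makes_moral_decisions: bool  # criterion 1: algorithms for morally significant choices
    explains_choices: bool       # criterion 2: can convey its moral choices to humans
    acts_unsupervised: bool      # criterion 3: authorized to act without immediate oversight

def is_smart_robot(robot: Robot) -> bool:
    # The study's definition requires all three criteria to hold at once.
    return (robot.makes_moral_decisions
            and robot.explains_choices
            and robot.acts_unsupervised)

print(is_smart_robot(Robot(True, True, True)))   # True: a "smart robot"
print(is_smart_robot(Robot(True, False, True)))  # False: an ordinary robot
```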
In cases involving smart robots, the study concludes that responsibility lies with the robot itself.
Why consider punishing a robot? For humans, punishment serves to separate offenders from society and to encourage rehabilitation. For robots, the reasoning may differ. The author of "Robot Criminals" suggests three justifications for punishment:
1. Censuring wrongful actions - a symbolic act that reinforces societal values that should not be violated.
2. Addressing emotional harm to victims - research indicates that humans can form emotional attachments to robots, so punishing a robot could help alleviate the pain experienced by victims of robotic errors.
3. Deterrence - smart robots themselves are unlikely to be swayed by the punishment of their peers, but it could motivate manufacturers to enhance safety measures.
The question of how to enact punishment on a robot is complex. Shutting down a malfunctioning robot, for instance, does nothing to compensate victims in cases of property damage.
The European Parliament report proposed several intriguing ideas:
1. Insurance for robots - manufacturers could contribute to an insurance fund that covers damages caused by their robots.
2. Paying robots a 'wage' - as automation threatens to displace millions of jobs, compensating robots could create funds for damages and address potential issues related to social services and taxes.
The Concept of Forgiveness
The philosophical aspects of forgiving a robot were explored by Dr. Michael Nagenborg from the University of Twente. He argues that just as forgiveness is essential in human relationships, it should also play a role in human-robot interactions.
As robots become increasingly significant in our lives, the question of forgiveness alongside accountability becomes crucial.
When are we more inclined to forgive a robot? A study titled "Robots at Work," conducted by researchers from the National University of Singapore, Yale University, and Texas A&M University, sought to find out.
In one experiment at the Henn-na Hotel in Japan, guests interacted with service robots during check-in and check-out. One group was instructed to view the robots as human-like, while the other group was not. Results indicated that guests who anthropomorphized the robots were more satisfied with the service and less likely to be upset by any errors.
The second experiment involved a robotic arm with a screen. Participants chose between two snacks, and the robot, which was meant to serve the selected treat, was programmed to make an error. Participants interacted with two variants of the robot: one with a robotic voice and a blank screen, and the other with a female voice and a face on the screen.
As in the first experiment, participants who interacted with the human-like robot reported greater satisfaction and were less upset when it served the wrong snack.
This aligns with previous research indicating that individuals tend to feel more comfortable engaging with robots displaying human-like features. The preference for female voices and names in virtual assistants like Alexa and Siri supports this notion.
The discussions surrounding robot liability, punishment, and forgiveness are challenging yet essential. As machines become increasingly intelligent, humans must consider the ramifications of coexisting with them.
Robots assist in daily tasks, from cleaning to driving, but they are not infallible. In 2019, the Henn-na Hotel dismissed half of its robotic staff due to operational failures. While current errors may seem minor, future mishaps could have severe consequences, necessitating tough choices regarding accountability and forgiveness.