Abstract

Nothing is perfect, and robots can make as many mistakes as any human, which can lead to a decrease in trust in them. However, it is possible for robots to repair a human's trust after they have made mistakes through various trust repair strategies such as apologies, denials, and promises. To date, the efficacy of these trust repairs in the human–robot interaction literature has been mixed. One reason for this might be that humans have different perceptions of a robot's mind. For example, some repairs may be more effective when humans believe that robots are capable of experiencing emotion, while other repairs might be more effective when humans believe robots possess intentionality. A key element that determines these beliefs is mind perception. Therefore, understanding how mind perception impacts trust repair may be vital to understanding trust repair in human–robot interaction. To investigate this, we conducted a study with 400 participants recruited via Amazon Mechanical Turk to determine whether mind perception influenced the effectiveness of three distinct repair strategies. The study employed an online platform in which the robot and participant worked together in a warehouse to pick and load 10 boxes. The robot made three mistakes over the course of the task and issued either a promise, denial, or apology after each mistake. Participants rated their trust in the robot before and after each mistake. The results indicate that, overall, individual differences in mind perception are vital considerations when seeking to implement effective apologies and denials between humans and robots.
Keywords: Intentional Agency, Artificial intelligence, trust violations, Science, trust repair strategies, Emotions, Theory of Mind, Individuality, Social Sciences, human-robot collaboration, mind perception, Trust, trust repair, Article, denial, explainable AI, apology, human-machine communication, promise, human–robot interaction, Conscious Experience, Humans, Human-Robot Trust Repair, Information Science, robotics, expectancy violation theory, Q, R, Robotics, robot errors, work collaboration, Artificial intelligence Trust, Medicine, robot trust, warehouse, Human-Artificial intelligence Interactions
| Indicator | Description | Value |
| --- | --- | --- |
| Citations | An alternative to the "Influence" indicator; also reflects the overall/total impact of the article in the research community at large, based on the underlying citation network (diachronically). | 18 |
| Popularity | Reflects the "current" impact/attention (the "hype") of the article in the research community at large, based on the underlying citation network. | Top 10% |
| Influence | Reflects the overall/total impact of the article in the research community at large, based on the underlying citation network (diachronically). | Average |
| Impulse | Reflects the initial momentum of the article directly after its publication, based on the underlying citation network. | Top 10% |