
doi: 10.1002/mar.21813
handle: 11562/1114787 , 11568/1221483 , 11585/946494 , 2158/1303120
Abstract: The present research focuses on the interplay between two common features of the customer service chatbot experience: gaze direction and anthropomorphism. Although the dominant approach in marketing theory and practice is to make chatbots as human‐like as possible, the current study, built on the humanness‐value‐loyalty model, addresses the chain of effects through which chatbots' nonverbal behaviors affect customers' willingness to disclose personal information and purchase intentions. By means of two experiments that adopt a real chatbot in a simulated shopping environment (i.e., car rental and travel insurance), the present work allows us to understand how to reduce individuals' tendency to see conversational agents as less knowledgeable and empathetic compared with humans. The results show that warmth perceptions are affected by gaze direction, whereas competence perceptions are affected by anthropomorphism. Warmth and competence perceptions are found to be key drivers of consumers' skepticism toward the chatbot, which, in turn, affects consumers' trust toward the service provider hosting the chatbot, ultimately leading consumers to be more willing to disclose their personal information and to repatronize the e‐tailer in the future. Building on the Theory of Mind, our results show that perceiving competence from a chatbot makes individuals less skeptical as long as they feel they are good at detecting others' ultimate intentions.
Keywords: chatbot; conversational agents; anthropomorphism; gaze direction; digital assistants; privacy disclosure; artificial intelligence; chatbot trust
| Indicator | Description | Value |
| --- | --- | --- |
| Selected citations | Citations derived from selected sources; an alternative to the "Influence" indicator, which reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | 108 |
| Popularity | Reflects the "current" impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network. | Top 1% |
| Influence | Reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | Top 10% |
| Impulse | Reflects the initial momentum of an article directly after its publication, based on the underlying citation network. | Top 0.1% |
