News Release

When humanlike chatbots miss the mark in customer service interactions

News from the Journal of Marketing

Peer-Reviewed Publication

American Marketing Association

Researchers from the University of Oxford published a new paper in the Journal of Marketing that examines the use of chatbots in customer-service roles and finds that when customers are angry, humanlike chatbots can negatively impact customer satisfaction, overall firm evaluations, and subsequent purchase intentions.

The study, forthcoming in the Journal of Marketing, is titled “Blame the Bot: Anthropomorphism and Anger in Customer-Chatbot Interactions” and is authored by Cammy Crolic, Felipe Thomaz, Rhonda Hadi, and Andrew Stephen.

Chatbots are increasingly replacing human customer-service agents on companies’ websites, social media pages, and messaging services. Designed to mimic humans, these bots often have human names (e.g., Amazon’s Alexa), humanlike appearances (e.g., avatars), and the capability to converse like humans. The assumption is that having humanlike qualities makes chatbots more effective in customer service roles. However, this study suggests that this is not always the case. 

Why do humanlike chatbots backfire with angry customers? Because they raise unrealistic expectations of how helpful they will be, and angry customers penalize the firm when those expectations go unmet.

The researchers conducted five experiments to better understand how humanlike chatbots impact customer service.

Study 1 analyzes nearly 35,000 chat sessions between an international mobile telecommunications company’s chatbot and its customers. Results show that when a customer was angry, the humanlike appearance of the chatbot had a negative effect on the customer’s satisfaction. 

Study 2 presented a series of mock customer-service scenarios and chats in which 201 participants were either neutral or angry and the chatbot was either humanlike or non-humanlike. Again, angry customers reported lower overall satisfaction when the chatbot was humanlike than when it was not.

Study 3 demonstrates that the negative effect extends to overall company evaluations, but not when the chatbot effectively resolves the problem (i.e., meets expectations). More than 400 angry participants engaged in a simulated chat with either a humanlike or non-humanlike chatbot, and their problems either were or were not effectively resolved during the interaction. As expected, when problems went unresolved, participants evaluated the company lower after interacting with a humanlike chatbot than with a non-humanlike one. When problems were effectively resolved, however, company evaluations were higher and did not differ by chatbot type.

Study 4 is an experiment with 192 participants that provides evidence that this negative effect is driven by the inflated expectations set by the humanlike chatbot. People expect humanlike chatbots to perform better than non-humanlike ones; when those expectations are not met, purchase intentions fall.

Study 5 shows that explicitly lowering customers’ expectations of the humanlike chatbot prior to the chat reduces angry customers’ negative responses. Once people no longer held unrealistic expectations of how helpful the humanlike chatbot would be, angry customers stopped penalizing it with negative ratings.

The researchers say, “Our findings provide a clear roadmap for how best to deploy chatbots when dealing with hostile, angry or complaining customers. It is important for marketers to carefully design chatbots and consider the context in which they are used, particularly when it comes to handling customer complaints or resolving problems.”

Firms should attempt to gauge whether a customer is angry before the chat begins (e.g., via natural language processing) and then deploy whichever chatbot is more effective: a humanlike chatbot for customers who are not angry and a non-humanlike chatbot for those who are. If this sophisticated strategy is not technically feasible, companies could assign non-humanlike chatbots to customer service situations where customers tend to be angry, such as complaint centers. Alternatively, companies could downplay the capabilities of humanlike chatbots (e.g., Slack’s chatbot introduces itself by saying “I try to be helpful (But I’m still just a bot. Sorry!)” or “I am not a human. Just a bot, a simple bot, with only a few tricks up my metaphorical sleeve!”). These strategies should help avoid or mitigate the drops in customer satisfaction, overall firm evaluation, and subsequent purchase intentions that angry customers report after interacting with humanlike chatbots.
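To make the recommendation concrete, here is a minimal, hypothetical sketch of such routing logic in Python. The study does not prescribe an implementation; the keyword heuristic below merely stands in for the natural language processing the authors mention, and the function and bot names (seems_angry, route_chatbot, the bot labels) are invented for illustration.

```python
import re

# Hypothetical stand-in for a trained sentiment/emotion classifier.
# A real deployment would score the customer's opening message with
# an NLP model rather than a keyword list.
ANGER_MARKERS = re.compile(
    r"\b(angry|furious|outraged|terrible|worst|ridiculous|unacceptable)\b",
    re.IGNORECASE,
)

def seems_angry(opening_message: str) -> bool:
    """Crude anger check; illustrative only."""
    return bool(ANGER_MARKERS.search(opening_message))

def route_chatbot(opening_message: str) -> str:
    """Per the study's recommendation: angry customers get a non-humanlike
    bot (no name, avatar, or chit-chat); everyone else gets the humanlike bot."""
    return "non_humanlike_bot" if seems_angry(opening_message) else "humanlike_bot"

print(route_chatbot("Hi! How do I update my billing address?"))       # humanlike_bot
print(route_chatbot("This is unacceptable. My order never arrived!"))  # non_humanlike_bot
```

In practice, the anger check would run on the customer’s first message (or on upstream signals such as a complaint-form entry) before a bot persona is assigned to the session.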

Full article and author contact information available at: https://doi.org/10.1177/00222429211045687

About the Journal of Marketing 

The Journal of Marketing develops and disseminates knowledge about real-world marketing questions useful to scholars, educators, managers, policy makers, consumers, and other societal stakeholders around the world. Published by the American Marketing Association since its founding in 1936, JM has played a significant role in shaping the content and boundaries of the marketing discipline. Christine Moorman (T. Austin Finch, Sr. Professor of Business Administration at the Fuqua School of Business, Duke University) serves as the current Editor in Chief.
https://www.ama.org/jm

About the American Marketing Association (AMA) 

As the largest chapter-based marketing association in the world, the AMA is trusted by marketing and sales professionals to help them discover what is coming next in the industry. The AMA has a community of local chapters in more than 70 cities and 350 college campuses throughout North America. The AMA is home to award-winning content, PCM® professional certification, premier academic journals, and industry-leading training events and conferences.
https://www.ama.org
