April 2-4, 2020 • Hyatt Regency • Lexington, KY
Intersectionality and Interdisciplinarity in Health Communication Research
Abstract: Evaluating the Effectiveness of Chatbots in Health Communication Context
◆ Di Lun, University of Miami
◆ Wanhsiu Sunny Tsai, University of Miami
◆ Ching-Hua Chuan, University of Miami
◆ Nick Carcioppolo, University of Miami
Chatbots have been embraced as the highly anticipated next step in digital evolution across various disciplines (Dudharejia, 2017). The rich literature on web-based interactivity and emerging research on chatbots have highlighted their potential for improving various communication outcomes through heightened interactivity, message contingency, and social presence. However, how the perceived human versus bot identity of a virtual assistant influences users’ responses to the agent, as well as their overall attitudes and experience across different communication contexts, remains undertheorized.
Research on health communication suggests that users may prefer communicating with an artificially intelligent agent rather than a human agent when the conversation involves embarrassing, sensitive, or stigmatized topics that may induce emotions such as embarrassment and shame (Redston, de Botte, & Smith, 2018; Romeo, 2016; Skinner, Biscope, & Poland, 2003). Conversely, in situations where users’ communication with a virtual agent is motivated by emotional venting, human agents may be preferred over chatbots.
To provide much-needed empirical evidence and to evaluate the effectiveness of chatbots in enhancing user engagement, information comprehension, and overall communication experience, this study asks whether the perceived identity of the virtual assistant as a chatbot or a human agent makes a difference in user response across different emotion-eliciting communication contexts (RQ), looking specifically at contexts in which an individual may feel angry or embarrassed, and a control condition in which no emotional reaction is elicited.
H1: When embarrassment is elicited, user response will be more favorable among those who perceive the virtual assistant as a chatbot.
H2: When anger is elicited, user response will be more favorable among those who perceive the virtual assistant as a human.
A 2 (virtual assistant identity: chatbot vs. human) × 3 (emotional elicitation: embarrassment, anger, neutral) experiment was conducted to assess the effect of different emotions on user engagement with a virtual agent on the topic of Human Papillomavirus (HPV). A total of 142 valid responses were used for analysis. The results indicate that when embarrassment was elicited, participants who interacted with a human agent more strongly wished they had talked to a human (M = 5.74, SD = 1.195) than those who interacted with a chatbot (M = 3.95, SD = 2.30; t(27.074) = -3.01, p = 0.006). However, perceived usefulness, interaction satisfaction, and attitude did not differ significantly between the human and bot conditions. When anger was elicited, those who talked with a bot reported significantly lower interaction satisfaction than those who spoke with a human (t(50) = -2.63, p = 0.011). Additionally, a qualitative analysis of the conversation content showed that people were more likely to provide meaningful input when interacting with a human than with a chatbot.
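For illustration only, the following is a minimal analysis sketch, not the authors' code: it shows how the unequal-variance (Welch's) t-tests reported above could be computed, which is consistent with the fractional degrees of freedom (27.074) in the results. The simulated scores, group sizes, and variable names are hypothetical and merely mirror the reported means and standard deviations.

    import numpy as np
    from scipy import stats

    # Hypothetical data simulated to mirror the reported means/SDs; not the study data.
    rng = np.random.default_rng(0)
    human_condition = rng.normal(loc=5.74, scale=1.195, size=26).clip(1, 7)  # "wished to talk to a human" ratings
    bot_condition = rng.normal(loc=3.95, scale=2.30, size=26).clip(1, 7)

    # Welch's t-test (does not assume equal variances across conditions)
    t_stat, p_value = stats.ttest_ind(bot_condition, human_condition, equal_var=False)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")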
Overall, despite the increasing importance and popularity of chatbots in computer-mediated communication, people remain happier interacting with a human agent than with a bot. However, a chatbot has the advantage of delivering an interactive experience comparable to that of a human agent while being more time- and cost-effective.