Abstract: Effects of Interactivity and Artificial Intelligence on Judgments of Expertise and Trustworthiness in Mobile Health Technology

◆ Jacinta Tran, University of North Texas
◆ Joseph McGlynn, University of North Texas

Artificial intelligence (AI) plays an integral role in digital health technology and the promotion of comprehensive well-being. Showing the potential to revolutionize communication in the healthcare industry, devices that integrate AI can deliver health diagnoses and minimize human error (Esmaeilzadeh, 2020). People have adopted mobile health devices that use AI algorithms to generate health recommendations and guide health decision-making (Johnson et al., 2021). While AI can provide benefits that promote health, users must perceive these communication tools as competent and trustworthy before adopting their recommendations. Moreover, features of digital systems influence how users judge the credibility of health communication devices (Li & Sundar, 2021). Thus, questions remain about which communication attributes increase user perceptions of credibility and trust toward AI in healthcare.

Informed by the MAIN Model (Sundar, 2008), this study employed a 2 (Agency: AI vs. Human) × 2 (Interactivity: Low vs. High) between-subjects experimental design. To create the stimulus materials, we built an application with the JavaScript library React and developed a chatbot that simulated a smartwatch application promoting healthy sleep habits. The chatbot provided tailored sleep recommendations to users. One version of the chatbot used a computer-generated voice to present itself as an AI entity, while a second version used a human voice recording. Interactivity varied between low and high based on the follow-up responses the smartwatch application communicated to the user. The low-interactivity condition provided a sleep recommendation and asked whether the user wanted to set a reminder. The high-interactivity condition asked how the user slept, replied based on the participant's answer, provided a sleep recommendation, and asked whether the user wanted to set a reminder (a simplified sketch of this condition logic appears below). Participants were randomly assigned to one of the four experimental conditions. After interacting with the chatbot, participants answered questions designed to measure their perceptions of the credibility and trustworthiness of the mobile health application.

Credibility judgments differed between participants who interacted with the AI agent and those who interacted with the chatbot that used a human voice recording. Those who engaged with the AI agent rated the sleep health application higher in expertise than did participants who heard the human voice recording. However, participants in the human-voice condition reported higher affective and cognitive trust toward the application than did participants who interacted with the AI chatbot. In addition, participants who received the sleep recommendation from the high-interactivity AI chatbot rated the application higher in competence. Results of the current study contribute to research on AI, credibility, and health communication. Theoretical implications include identifying how dimensions of credibility judgments differ for AI versus human chatbots, suggesting that distinct communication factors shape perceptions of expertise and trustworthiness. The paper concludes with a discussion of how AI in mobile health applications can contribute to developing communication strategies that promote comprehensive well-being and positive health outcomes.
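For illustration, the sketch below shows one way the 2 (Agency) × 2 (Interactivity) condition logic described above could be expressed in TypeScript. It is a simplified, hypothetical rendering: the type and function names (Condition, assignCondition, buildDialogue) are invented for this example and do not reproduce the code of the React application used in the study.

```typescript
// Illustrative sketch only: the 2 x 2 condition logic from the abstract,
// reduced to plain TypeScript. Names are hypothetical, not the study's code.

type AgencyCondition = "ai" | "human";        // computer-generated voice vs. human voice recording
type InteractivityCondition = "low" | "high"; // number of follow-up exchanges with the user

interface Condition {
  agency: AgencyCondition;
  interactivity: InteractivityCondition;
}

// Randomly assign a participant to one of the four cells of the 2 x 2 design.
function assignCondition(): Condition {
  return {
    agency: Math.random() < 0.5 ? "ai" : "human",
    interactivity: Math.random() < 0.5 ? "low" : "high",
  };
}

// Build the ordered chatbot turns for a condition.
// Low interactivity: sleep recommendation, then an offer to set a reminder.
// High interactivity: check-in question, tailored reply, recommendation, reminder offer.
function buildDialogue({ interactivity }: Condition, recommendation: string): string[] {
  const turns: string[] = [];
  if (interactivity === "high") {
    turns.push("How did you sleep last night?");
    turns.push("<tailored reply based on the participant's answer>");
  }
  turns.push(recommendation);
  turns.push("Would you like me to set a reminder for this?");
  return turns;
}

// Example: assign a participant and print the turns they would receive.
const condition = assignCondition();
console.log(condition.agency, buildDialogue(condition, "Try to go to bed by 10:30 p.m. tonight."));
```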