Abstract: Evaluating First-Year Interns' Use of a Telephone Interpretation System During OSCEs

◆ Taylor Vasquez, University of Florida
◆ Easton Wollney, University of Florida
◆ Lynne Meyer, University of Florida
◆ Julia Close, University of Florida
◆ Merry Jennifer Markham, University of Florida
◆ Lou Ann Cooper, University of Florida
◆ Carolyn Stalvey, University of Florida
◆ Carma L. Bylund, University of Florida

Background
As part of their training at our institution, first-year interns are required to participate in an Objective Structured Clinical Exam (OSCE) during the week prior to their program start date (OSCE 1) and again toward the end of the intern year (OSCE 2). This study analyzed OSCE 2, particularly an evaluation station in which interns spoke with a standardized patient (SP) through an interpreter.
The OSCE is a multi-station assessment that uses SPs (actors) to evaluate clinical skills. The SPs grade the interns' performance using rubrics, and the stations are video recorded for quality assurance purposes. The results of the OSCE are formative, allowing interns and their program directors to identify clinical skills that need improvement.
Objective
The purpose of this study was to examine the communication between a medical resident and a standardized patient using interpretation resources in the context of a clinical examination involving discharging a patient from the hospital. Furthermore, we aimed to assess residents' ability to use interpretation resources with a patient who does not speak English.
Methods
Ninety-three OSCE videos, recorded in March 2019 and each approximately 14 minutes long, were coded by two communication doctoral students. Twenty-eight percent of the videos were coded jointly to establish intercoder reliability (Cohen's kappa = .83); the remaining videos were coded independently by the two coders. The codebook used for the content analysis was adapted from the SP evaluation form used in the live, in-person interactions with the interns as part of an initial benchmarking assessment of interns' clinical competencies.
Results
Eighty-three percent of interns (n = 75) correctly identified the cues that the SP did not speak English and used the interpretation service. Only four percent experienced technical difficulties. Most interns (74.7%) addressed the patient directly rather than the interpreter, using pronouns such as "you" rather than "her" or "the patient" during the interaction. Interns scored highly on criteria used to evaluate empathic communication in response to SP cues, such as addressing the patient by name (81.3%), maintaining consistent eye contact with the patient (82.7%), and avoiding medical jargon (84%). However, most interns did not summarize the discussion (14.5%) or check understanding by asking the patient to repeat back pertinent information (25.3%). Almost all interns expressed appropriate levels of understanding of the SPs' emotions (92%), and 97% demonstrated average or above-average performance. By the conference, we will present the full results for all 93 recorded OSCEs, along with an analysis of the differences between the outside observers' codes and the ratings given by the SPs who participated in the OSCEs.
Conclusion
Most interns effectively used the interpretation service during the OSCEs, although there remains room for growth in communication skills when interacting with the SP, such as summarizing discussions and checking patient understanding.