Abstract: Inconsistent Reporting of Best Practices: Findings from a Health Communication Campaign Process Evaluation Systematic Review

◆ Hannah Getachew-Smith, Northwestern University
◆ Andy J. King, Iowa State University
◆ Charlotte Marshall-Fricker, Emory University
◆ Courtney L. Scherr, Northwestern University

Health communication campaign evaluation research has primarily focused on outcome (summative) evaluation to assess programmatic ‘success,’ often overlooking process evaluation, broadly defined as evaluation that describes the program, identifies barriers to successful implementation, and systematically assesses whether the campaign was implemented as planned. To standardize methods, researchers suggest evaluating campaign context, reach, dose delivered, dose received, fidelity, implementation, recruitment, barriers, and contamination during campaign process evaluation, but it is unclear whether these elements are reported in the literature. This study reports a systematic review of published health campaign evaluation research to identify how best practices for conducting process evaluation are reported. The review included peer-reviewed, English-language published literature on process and implementation evaluation of health campaigns with a media component to examine process evaluation reporting and utility. Using a multi-phased process, three researchers independently screened articles, used a consensus-based approach for article inclusion and full-text extraction, and coded articles for process evaluation best practices using checklists developed by evaluation researchers (Baranowski & Stables, 2000; Linnan & Steckler, 2002). A total of 691 unique references were identified, screened, and assessed for eligibility. After full-text examination, 46 (7%) articles published between 1987 and 2018 were included; in the majority of these (n = 33; 72%), process evaluation was the main focus of the article. Among these, the process evaluation elements reach (n = 40; 87%), recruitment (n = 34; 74%), dose delivered (n = 28; 61%), and fidelity (n = 24; 52%) were reported most frequently, while barriers (n = 13; 28%), implementation (n = 9; 20%), and contamination (n = 8; 17%) were reported less frequently. Language used to describe these elements differed across articles, as terms (e.g., fidelity and dose received) were not explicitly stated and had to be inferred from contextual clues in descriptions of measures and results. A majority of articles described the methods used to conduct process evaluation activities (n = 44; 96%) and stated process evaluation objectives (n = 40; 87%); however, both were reported with varying levels of detail. Only 18 (39%) articles reported how process evaluation results informed campaign implementation strategies. Process evaluation provides researchers and practitioners with insights about mechanisms and intervening variables that could meaningfully affect interpretations of summative evaluations; however, process evaluation methods and findings are infrequently included in the published literature. This review demonstrates that elements considered best practices for conducting process evaluations are inconsistently reported in the literature. Campaign evaluators appear either not to follow best practice guidelines when conducting evaluations or to omit these elements when disseminating findings, both of which are a disservice to the field. Process evaluations have practical implications for health communicators’ understanding of how, why, and whether campaigns are effective. More comprehensive and consistent dissemination of process evaluation research methods and results would help to illuminate replicable campaign implementation strategies.
Based on this review, we provide recommendations for evidence-based process evaluation components to guide evaluation planning and to encourage more standardized reporting of evaluation efforts.