Title:
Achieving Inter-Rater Agreement and Inter-Rater Reliability to Assess Fidelity of an Occupation-Based Coaching (OBC) Clinical Trial Intervention.
Authors:
Abbott, Amy Ann, Shin, Julia, Carlson, Kathryn, Russell, Marion, Qi, Yongyue, Storm, Hannah, Jewell, Vanessa Dawn
Source:
British Journal of Occupational Therapy; Mar 2025, Vol. 88 Issue 3, p133-141, 9p
Subject Terms:
FAMILIES & psychology, MEDICAL protocols, TYPE 1 diabetes, SELF-efficacy, RESEARCH funding, CRONBACH'S alpha, PILOT projects, EVALUATION of medical care, TEACHING methods, DESCRIPTIVE statistics, OCCUPATIONAL therapy, TELEMEDICINE, CAREGIVERS, CHRONIC diseases, QUALITY of life, DATA analysis software, INTER-observer reliability, VIDEO recording
Abstract:
Introduction: Establishing inter-rater agreement and inter-rater reliability ensures that multiple raters evaluate observed interventions consistently and that clinical research interventions are delivered as intended by the trial protocol.
Purpose: Using the Guidelines for Reporting Reliability and Agreement Studies, we (a) exemplified the steps to establish inter-rater reliability and inter-rater agreement on the occupation-based coaching Video Evaluation Tool and (b) evaluated best practices that promoted high inter-rater reliability and inter-rater agreement between blinded raters prior to starting a pilot randomized controlled trial. The randomized controlled trial examined the preliminary effectiveness of occupation-based coaching delivered via telehealth to rural families of children living with type 1 diabetes, with the aim of improving family quality of life, participation, self-efficacy, and child health outcomes.
Method: We created a library of 13 occupation-based coaching videos portraying a range of evaluations, scores, and ratings. Inter-rater agreement and reliability on the occupation-based coaching Video Evaluation Tool were established through iterations of (a) blinded rater training, (b) data collection using the tool, and (c) statistical analysis using Cohen's kappa and Cronbach's alpha.
Findings: Occurrence and Non-Occurrence Checklist (κ = 0.881, p < 0.001); "Caregiver Talk" and "Interventionist Talk Analysis" (ICC = 0.991–0.999, p < 0.001); Evidence of Independent Capacity Rating (ICC = 0.867, p = 0.006).
Conclusion: Strong inter-rater reliability and inter-rater agreement were established by engaging two blinded raters in multifaceted training, integrating real-life clients and contexts into the instrumentation and training, and precisely defining the rubric criteria. By employing such practices, high inter-rater reliability and agreement can be achieved in clinical research involving interventions and instruments that are highly subjective and individualized. To attain greater scientific confidence in the intervention effect, it is necessary to develop a multidomain fidelity framework and to establish high inter-rater agreement and reliability in the instruments before implementing clinical trials. [ABSTRACT FROM AUTHOR]
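The statistics reported in the Findings can be computed in a few lines for two raters. The sketch below is illustrative only, not the authors' analysis pipeline: the rating data are hypothetical, the ICC form is assumed to be the two-way consistency variant ICC(3,1) since the abstract does not specify which form was used, and NumPy and scikit-learn are assumed to be available.

```python
# Minimal sketch (hypothetical data, not the study's pipeline) of the three
# inter-rater statistics named in the abstract: Cohen's kappa for the
# categorical checklist, Cronbach's alpha, and an ICC for continuous ratings.
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical data: two blinded raters scoring the same 13 training videos
# on one Occurrence (1) / Non-Occurrence (0) checklist item.
rater_a = np.array([1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1])
rater_b = np.array([1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1])
kappa = cohen_kappa_score(rater_a, rater_b)

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (observations x raters) score matrix."""
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()      # sum of per-rater variances
    total_var = scores.sum(axis=1).var(ddof=1)        # variance of row totals
    return k / (k - 1) * (1 - item_var / total_var)

def icc_3_1(scores: np.ndarray) -> float:
    """ICC(3,1): two-way mixed, consistency, single rater (assumed variant)."""
    n, k = scores.shape
    grand = scores.mean()
    ms_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    resid = (scores - scores.mean(axis=1, keepdims=True)
             - scores.mean(axis=0, keepdims=True) + grand)
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Hypothetical continuous scores (e.g., talk-analysis percentages) per video.
ratings = np.array([[62.0, 61.5], [48.0, 49.0], [70.5, 70.0], [55.0, 56.5],
                    [66.0, 65.0], [44.5, 45.0], [58.0, 58.5]])
print(f"kappa={kappa:.3f}, alpha={cronbach_alpha(ratings):.3f}, "
      f"ICC(3,1)={icc_3_1(ratings):.3f}")
```

Note that the two families of statistics answer different questions: kappa measures agreement (do the raters assign the same category beyond chance?), while alpha and the ICC measure reliability (do the raters' continuous scores covary consistently?), which is why the Video Evaluation Tool reports both.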
Database:
Complementary Index