Academic Journal

Interrater variation in scoring radiological discrepancies.

Bibliographic Details
Title: Interrater variation in scoring radiological discrepancies.
Authors: MUCCI, B.1 Brian.Mucci@ggc.scot.nhs.uk, MURRAY, H.2, DOWNIE, A.1, OSBORNE, K.1
Source: British Journal of Radiology. Aug2013, Vol. 86 Issue 1028, p1-5. 5p.
Subject Terms: *MEDICAL radiology, *INTER-observer reliability, *RADIOLOGISTS, *MEDICAL errors, *SELF-discrepancy
Abstract: Objective: Discrepancy meetings are an important aspect of clinical governance. The Royal College of Radiologists has published advice on how to conduct meetings, suggesting that discrepancies are scored using the scale: 0=no error, 1=minor error, 2=moderate error and 3=major error. We have noticed variation in scores attributed to individual cases by radiologists and have sought to quantify the variation in scoring at our meetings. Methods: The scores from six discrepancy meetings totalling 161 scored events were collected. The reliability of scoring was measured using Fleiss' kappa, which calculates the degree of agreement in classification. Results: The number of cases rated at the six meetings ranged from 18 to 31 (mean 27). The number of raters ranged from 11 to 16 (mean 14). Only cases where all the raters scored were included in the analysis. The Fleiss' kappa statistic ranged from 0.12 to 0.20, and mean kappa was 0.17 for the six meetings. Conclusion: A kappa of 1.0 indicates perfect agreement above chance and 0.0 indicates agreement equal to chance. A rule of thumb is that a kappa ⩾0.70 indicates adequate interrater agreement. Our mean result of 0.172 shows poor agreement between scorers. This could indicate a problem with the scoring system or may indicate a need for more formal training and agreement in how scores are applied. Advances in knowledge: Scoring of radiology discrepancies is highly subjective and shows poor interrater agreement. [ABSTRACT FROM AUTHOR]
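The abstract describes measuring interrater agreement with Fleiss' kappa applied to 0-3 discrepancy scores from multiple raters. The sketch below is an illustration only, not code from the article: it shows how such a kappa could be computed from a cases-by-raters score matrix using the statsmodels library, with invented example data.

```python
# A minimal sketch, not from the article: computing Fleiss' kappa for
# discrepancy scores, assuming a cases-by-raters matrix of 0-3 scores.
# The score values below are invented for illustration only.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Rows are cases, columns are raters; values follow the 0-3 scale
# (0 = no error, 1 = minor, 2 = moderate, 3 = major).
scores = np.array([
    [0, 0, 1, 0],
    [2, 3, 2, 2],
    [1, 1, 0, 2],
    [3, 3, 3, 2],
    [0, 1, 1, 1],
])

# Collapse raw scores into a cases-by-categories count table, then compute
# Fleiss' kappa (1.0 = perfect agreement, 0.0 = agreement equal to chance).
table, _ = aggregate_raters(scores)
print(f"Fleiss' kappa: {fleiss_kappa(table, method='fleiss'):.2f}")
```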
Database: Academic Search Index
Description
ISSN: 0007-1285
DOI: 10.1259/bjr.20130245