Yeli Pi1, Mitchell P Wilson2, Prayash Katlariwala2, Medica Sam2, Thomas Ackerman2, Lee Paskar2, Vimal Patel2, Gavin Low2. 1. Department of Radiology and Diagnostic Imaging, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, AB, Canada. pi@ualberta.ca. 2. Department of Radiology and Diagnostic Imaging, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, AB, Canada.
Abstract
PURPOSE: The objective of this study was to evaluate the diagnostic accuracy, interobserver variability, and common lexicon pitfalls of the ACR O-RADS scoring system among staff radiologists without prior experience with O-RADS. MATERIALS AND METHODS: After independent review of the ACR O-RADS publications and 30 training cases, three fellowship-trained, board-certified staff radiologists scored 50 pelvic ultrasound exams using the O-RADS system. Diagnostic accuracy and the area under the receiver operating characteristic curve (AUC) were analyzed for each reader. Overall agreement and pair-wise agreement between readers were also analyzed. RESULTS: Excellent specificities (92 to 100%) and NPVs (92 to 100%) and variable sensitivities (72 to 100%) and PPVs (66 to 100%) were observed. Considering O-RADS 4 and O-RADS 5 as predictors of malignancy, individual reader AUC values ranged from 0.94 to 0.98 (p < 0.001). Overall inter-reader agreement for all three readers was "very good," κ = 0.82 (95% CI 0.73 to 0.90, p < 0.001). Pair-wise agreement between readers was also "very good," κ = 0.86 to 0.92. Fourteen of 150 lesions were misclassified, with the most common error being down-scoring of a solid lesion with irregular outer contours. CONCLUSION: Even without specific training, experienced ultrasound readers can achieve excellent diagnostic performance and high inter-reader reliability with self-directed review of guidelines and cases. The study highlights the effectiveness of ACR O-RADS as a stratification tool for radiologists and supports its continued use in practice.
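The two summary statistics reported above can be illustrated with a minimal sketch. This is not the authors' analysis code: the reader scores and pathology labels below are invented for illustration, and the functions implement the standard rank-based AUC and unweighted Cohen's kappa from first principles.

```python
# Hypothetical sketch (illustrative data, not study data): computing the
# abstract's two key statistics from paired O-RADS scores and pathology.

def auc(labels, scores):
    """Rank-based AUC: probability that a malignant case outscores a benign one
    (ties count as half a win)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def cohen_kappa(a, b):
    """Unweighted Cohen's kappa for two readers' categorical scores."""
    n = len(a)
    categories = set(a) | set(b)
    p_obs = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    p_exp = sum((a.count(c) / n) * (b.count(c) / n)        # chance agreement
                for c in categories)
    return (p_obs - p_exp) / (1 - p_exp)

# Illustrative O-RADS scores (1-5) from two readers, with pathology outcome
reader1 = [2, 3, 5, 4, 1, 5, 2, 4, 3, 5]
reader2 = [2, 3, 5, 4, 2, 5, 2, 3, 3, 5]
malignant = [0, 0, 1, 1, 0, 1, 0, 1, 1, 1]  # 1 = malignant on pathology

# Per-reader diagnostic call treats O-RADS 4 or 5 as a positive prediction
calls1 = [int(s >= 4) for s in reader1]

print(f"AUC (reader 1): {auc(malignant, reader1):.2f}")
print(f"Pairwise kappa (readers 1 vs 2): {cohen_kappa(reader1, reader2):.2f}")
```

On the Altman scale used in the abstract, kappa above 0.80 is interpreted as "very good" agreement.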