Sarah J Lewis, Natacha Borecky, Tong Li, Melissa L Barron, Patrick Brennan, Phuong Dung Yun Trieu.
Abstract
Provision of online and remote specialist education, and of general continuing professional education, in medicine is a growing field. In radiology specifically, access to web-based platforms that house high-resolution medical images and high-fidelity simulated activities continues to expand as technology improves. This study investigates the differences in providing a self-directed specialist radiology education system in two modes: remotely at clinics and at in-person workshops. A total of 335 Australian radiologists completed 562 readings of mammogram test sets through the web-based interactive BREAST platform, 325 at conference workshops and 237 at their workplaces. Each test set comprised 60 mammogram cases (20 cancer and 40 normal). Radiologists marked the location of any cancers, and their performance was measured via five metrics of diagnostic accuracy. Results show that the location of engagement with BREAST did not yield any significant difference in performance, either across all radiologists or within the same radiologists between the two reading modes (P > 0.05). Radiologists who read screening mammograms for BreastScreen Australia performed better when they completed the test sets at designated workshops (P < 0.05), as did radiologists who read > 100 cases per week (P < 0.05). In contrast, radiologists who read mammograms less frequently recorded better specificity and JAFROC scores at clinics (P < 0.05). The findings show that remotely accessed online education for specialised training and core skill building in radiology can provide breast radiologists with a learning opportunity similar to on-site dedicated workshops at scientific meetings. For readers with high mammogram volumes, a workshop setting may provide a superior experience, while a clinic setting is more helpful to less experienced readers.
Keywords: Breast cancer; Digital mammograms; Radiology; Remote training; Simulation
Year: 2022 PMID: 35511333 PMCID: PMC9069117 DOI: 10.1007/s13187-022-02156-w
Source DB: PubMed Journal: J Cancer Educ ISSN: 0885-8195 Impact factor: 1.771
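As an illustrative sketch only (not the authors' code), the case-level metrics reported in the tables below can be computed per reader from a 60-case test set as follows. The confidence-rating scale, the decision threshold, and the function names here are assumptions; lesion sensitivity and JAFROC additionally require the marked lesion locations and are typically computed with dedicated software, so they are not sketched.

```python
# Sketch: per-reader case-level metrics on a BREAST-style test set
# (20 cancer, 40 normal). truth: 1 = cancer, 0 = normal; ratings: one
# confidence score per case. Threshold and scale are assumed, not from
# the paper.

def case_metrics(truth, ratings, threshold=3):
    """Return (sensitivity, specificity) at a fixed rating threshold."""
    calls = [r >= threshold for r in ratings]  # reader "calls" a case cancer
    tp = sum(1 for t, c in zip(truth, calls) if t and c)
    tn = sum(1 for t, c in zip(truth, calls) if not t and not c)
    n_pos = sum(truth)
    n_neg = len(truth) - n_pos
    return tp / n_pos, tn / n_neg

def roc_auc(truth, ratings):
    """Empirical ROC AUC via the Mann-Whitney statistic: the probability
    that a cancer case is rated above a normal case, counting ties as 0.5."""
    pos = [r for t, r in zip(truth, ratings) if t]
    neg = [r for t, r in zip(truth, ratings) if not t]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Unlike sensitivity and specificity, the AUC uses the full rating scale rather than a single threshold, which is why it can separate reader groups even when thresholded metrics do not.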
Fig. 1 A radiologist reading a mammogram BREAST test set
Reading performance of radiologists in different experience groups at workshops (W) and clinics (C); BSA = BreastScreen Australia
| Role | Location (number of readings) | Specificity | Sensitivity | Lesion sensitivity | ROC | JAFROC |
|---|---|---|---|---|---|---|
| Non-BSA radiologists | Clinics (89) | 0.793 ± 0.126 | 0.745 ± 0.180 | 0.635 ± 0.203 | 0.801 ± 0.103 | 0.683 ± 0.129 |
| | Workshops (169) | 0.733 ± 0.171 | 0.735 ± 0.169 | 0.592 ± 0.193 | 0.779 ± 0.093 | 0.626 ± 0.148 |
| | P value | 0.007 | 0.441 | 0.072 | 0.045 | 0.002 |
| BSA radiologists | Clinics (148) | 0.777 ± 0.139 | 0.782 ± 0.153 | 0.688 ± 0.176 | 0.819 ± 0.087 | 0.706 ± 0.118 |
| | Workshops (156) | 0.800 ± 0.144 | 0.845 ± 0.120 | 0.777 ± 0.131 | 0.867 ± 0.058 | 0.785 ± 0.083 |
| | P value | 0.036 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 |
| Radiologists reading ≤ 100 cases per week | Clinics (133) | 0.790 ± 0.134 | 0.751 ± 0.168 | 0.643 ± 0.186 | 0.802 ± 0.094 | 0.685 ± 0.121 |
| | Workshops (204) | 0.738 ± 0.175 | 0.751 ± 0.172 | 0.618 ± 0.199 | 0.790 ± 0.093 | 0.649 ± 0.149 |
| | P value | 0.007 | 0.982 | 0.304 | 0.243 | 0.040 |
| Radiologists reading > 100 cases per week | Clinics (104) | 0.774 ± 0.133 | 0.790 ± 0.158 | 0.688 ± 0.187 | 0.824 ± 0.092 | 0.712 ± 0.123 |
| | Workshops (121) | 0.811 ± 0.127 | 0.850 ± 0.102 | 0.786 ± 0.110 | 0.873 ± 0.051 | 0.792 ± 0.078 |
| | P value | 0.009 | 0.016 | < 0.0001 | < 0.0001 | < 0.0001 |
Fig. 2 Feedback on a radiologist's interpretation: a correct cancer case detection with an incorrect lesion location; b correct cancer case detection with correct lesion location
Fig. 3 Overall performances of radiologists at workshop (W) and clinic (C)
Comparison of mammogram reading performances of non-BSA and BSA radiologists in clinics and workshops
| Reading location | Role (number of readings) | Specificity | Sensitivity | Lesion sensitivity | ROC | JAFROC |
|---|---|---|---|---|---|---|
| Clinics | Non-BSA (89) | 0.793 ± 0.126 | 0.745 ± 0.180 | 0.635 ± 0.203 | 0.801 ± 0.103 | 0.683 ± 0.129 |
| | BSA (148) | 0.777 ± 0.139 | 0.782 ± 0.153 | 0.688 ± 0.176 | 0.819 ± 0.087 | 0.706 ± 0.118 |
| | P value | 0.454 | 0.147 | 0.101 | 0.315 | 0.240 |
| Workshops | Non-BSA (169) | 0.733 ± 0.171 | 0.735 ± 0.169 | 0.592 ± 0.193 | 0.779 ± 0.093 | 0.626 ± 0.148 |
| | BSA (156) | 0.800 ± 0.144 | 0.845 ± 0.120 | 0.777 ± 0.131 | 0.867 ± 0.058 | 0.785 ± 0.083 |
| | P value | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 |
Paired comparison of reading performances of 13 BreastScreen radiologists at clinics and workshops
| Location | Specificity | Sensitivity | Lesion sensitivity | ROC | JAFROC |
|---|---|---|---|---|---|
| Clinics | 0.730 ± 0.187 | 0.856 ± 0.090 | 0.770 ± 0.122 | 0.845 ± 0.065 | 0.745 ± 0.080 |
| Workshops | 0.810 ± 0.128 | 0.836 ± 0.094 | 0.760 ± 0.108 | 0.856 ± 0.054 | 0.772 ± 0.106 |
| P value | 0.093 | 0.322 | 0.583 | 0.552 | 0.279 |