Radiographic image interpretation by Australian radiographers: a systematic review.

Andrew Murphy1,2,3, Ernest Ekpo3, Thomas Steffens4, Michael J Neep5,6.   

Abstract

INTRODUCTION: Radiographer image evaluation methods such as the preliminary image evaluation (PIE), a formal comment describing radiographers' findings in radiological images, are embedded in the contemporary radiographer role within Australia. However, perceptions surrounding both the capacity for Australian radiographers to adopt PIE and the barriers to its implementation are highly variable and seldom evidence-based. This paper systematically reviews the literature to examine radiographic image interpretation by Australian radiographers and the barriers to implementation.
METHODS: The Preferred Reporting Items for Systematic Reviews and Meta-Analyses were used to systematically review articles via Scopus, Ovid MEDLINE, PubMed, ScienceDirect and Informit. Articles were deemed eligible for inclusion if they were English language, peer-reviewed and explored radiographic image interpretation by radiographers in the context of the Australian healthcare system. Letters to the editor, opinion pieces, reviews and reports were excluded.
RESULTS: A total of 926 studies were screened for relevance, and 19 articles met the inclusion criteria. The 19 articles consisted of 11 cohort studies, seven cross-sectional surveys and one randomised controlled trial. Studies exploring radiographers' image interpretation performance utilised a variety of methodological designs, with accuracy, sensitivity and specificity values ranging from 57 to 98%, 45 to 98% and 68 to 98%, respectively. Primary barriers to radiographic image evaluation by radiographers included a lack of accessible educational resources and a lack of support from both radiologists and radiographers.
CONCLUSION: Australian radiographers can undertake PIE; however, educational and clinical support barriers limit implementation. Access to targeted education and a clear definition of radiographers' image evaluation role may drive a wider acceptance of radiographer image evaluation in Australia.
© 2019 The Authors. Journal of Medical Radiation Sciences published by John Wiley & Sons Australia, Ltd on behalf of Australian Society of Medical Imaging and Radiation Therapy and New Zealand Institute of Medical Radiation Technology.

Keywords:  General radiography; image interpretation; radiographer commenting; radiography; systematic review

Year:  2019        PMID: 31545009      PMCID: PMC6920699          DOI: 10.1002/jmrs.356

Source DB:  PubMed          Journal:  J Med Radiat Sci        ISSN: 2051-3895


Introduction

The initial evaluation of plain radiographic images for potential abnormalities by radiographers has been accepted practice in the United Kingdom (UK) since the early 1980s.1, 2 In an attempt to reduce diagnostic errors in the emergency department, Berman et al.1 proposed a system by which radiographers affixed a red sticker to plain X‐ray films they believed to be abnormal. The red sticker acted as a visual cue, alerting the referrer to a potential abnormality. This simple yet effective procedure was known as the ‘red dot system’.1 The red dot system, more recently known as a ‘Radiographer Abnormality Detection System’ (RADS)3, provided a time‐efficient overlap between emergency referrers and radiographers when assessing a plain radiographic image. The lack of written documentation as to what the radiographer was flagging is a notable communication flaw with the RADS.4 To address this limitation, RADS in the UK evolved to include a brief comment accompanying an examination, describing the flagged abnormality(ies). The brief accompanying remarks, known as the ‘radiographer comment’, were officially termed the preliminary clinical evaluation (PCE) in the UK.5, 6 The role of the medical imaging professional in the United Kingdom has expanded into more advanced roles; in some cases, appropriately trained radiographers perform independent diagnostic reporting.7 The support of the Society and College of Radiographers and the Royal College of Radiologists, along with low radiologist‐to‐population ratios and intensive university‐based postgraduate radiographer training courses, has allowed this role expansion to occur in the UK.7, 8, 9 Despite the advances in radiographer image interpretation in the UK, the role of the radiographer in image evaluation within Australia has remained comparably inactive. 
General radiographic image interpretation by radiographers in Australia has not progressed much past the initial discussion surrounding the ‘red dot system’.10 For this reason, radiographer reporting in Australia is not a consideration at this time, nor is it explored in this review. Widespread implementation of radiographic image evaluation systems such as RADS is yet to be fully realised in practice. However, the Medical Radiation Practice Board of Australia (MRPBA) stipulates that Australian radiographers must communicate significant clinical findings to the appropriate clinicians, preferably via a departmental protocol or instruction that standardises verbal or written communication with associated record keeping.11 The Australian Society of Medical Imaging and Radiation Therapy (ASMIRT) is currently developing a process to examine and certify radiographers to engage in radiographic image evaluation with a written component known as a preliminary image evaluation (PIE).12 A PIE is a brief written description that acts in the same way as a ‘radiographer comment’ or PCE in that it clearly communicates significant clinical findings to the referring clinician in the absence of a definitive radiologist report.13 It should be noted that the PIE is not a substitute for the radiologist report; it provides timely communication of a potential abnormality to the referrer in order to support patient treatment decisions when the radiologist report is unavailable. Emergency doctors, nurse practitioners and physiotherapists also play an active role in interpreting medical images in Australia;14, 15, 16 however, image interpretation by these other healthcare professionals is beyond the scope of this review, which focuses on radiographer image interpretation.
Throughout this systematic review, the phrases ‘radiographic image interpretation’ and ‘radiographic image evaluation by radiographers’ pertain to abnormality detection systems such as RADS and commenting protocols such as PIE; they do not refer to, nor imply, radiographer reporting. Image interpretation by radiographers is an internationally explored subject with inconsistent definitions. Each term corresponds to a distinct clinical practice, and to help the reader avoid misinterpretation, the commonly used terms within the literature are defined in Table 1.
Table 1

Radiographic interpretation terms as defined in the literature.

Red dot system: Red sticker affixed to a radiographic film to flag a potential abnormality
Radiographer abnormality detection system (RADS): A ‘flagging’ system in which the radiographer digitally affixes an indicator to a radiographic image to indicate a potential abnormality
Preliminary clinical evaluation (PCE), preliminary image evaluation (PIE), radiographer commenting: A brief written comment by a radiographer to communicate what they believe could be an abnormality. Not a definitive radiological report
Radiographer report: A definitive radiological report performed by a trained radiographer. Not explored in this review
Radiologist report: A definitive radiological report
Research has shown that in the absence of a radiologist report, the implementation of radiographer image evaluation systems improves clinical decision‐making in emergency departments.1, 17 However, despite the MRPBA's expectation of radiographers (first published in 201311) to communicate clinically significant findings, progress has been slow. Importantly, there is a lack of understanding of the enablers and barriers to radiographer image evaluation in Australia. This review examines the literature on image evaluation by Australian radiographers, including the barriers to radiographer image evaluation systems. It aims to inform and assist the future implementation of such systems in Australia.

Methodology

Search strategy

A systematic review of the literature using the Preferred Reporting Items for Systematic Reviews and Meta‐Analyses (PRISMA) strategy was conducted via five databases (Scopus, Ovid MEDLINE, PubMed, ScienceDirect and Informit). A Google cross‐search was also conducted to identify articles not found in the database search, and reference lists of eligible articles were reviewed for additional studies. A hand search was also conducted throughout the contents of both the Journal of Medical Radiation Sciences and the Journal of Medical Imaging and Radiation Oncology. The following search terms were applied: ‘radiographer commenting’, ‘red dot system’, ‘Preliminary Image Evaluation’, ‘radiographic image interpretation’, ‘radiographer abnormality detection system’ and ‘radiographer reporting’. Search terms were combined with ‘Australia’ using the Boolean operators ‘AND’ and ‘OR’. ‘Radiographer reporting’ was included in the search strategy because the term was mistakenly used in earlier studies when referring to image evaluation systems such as RADS and PIE. The last search was conducted on 25 April 2019.

Eligibility criteria

Studies were deemed eligible for review if they were peer‐reviewed and focused on general radiographic image evaluation by radiographers in the context of Australian practice. Studies involving imaging modalities other than general radiography (e.g. mammography, magnetic resonance imaging, computed tomography) were excluded. Opinion pieces, review articles, letters to the editor, case reports and study protocols were also excluded, as were studies not written in English. No restrictions were placed on publication date. All titles and abstracts were independently screened by two authors to identify studies that potentially met the eligibility criteria.

Data extraction and quality assessment

To mitigate the potential for bias, data were independently extracted and analysed by two authors (AM and EE) using a modified McMaster critical appraisal tool.18 This tool was chosen because of the mixed methods of the studies reviewed. Using the tool, the information extracted from each article included year of publication, author details, title, objectives, methodology and pertinent findings, including barriers and enablers to implementation and radiographer image interpretation performance. Studies were graded against the 15 criteria of the modified McMaster critical appraisal tool:18 a clear study purpose; a relevant literature review; a clearly stated and appropriate design; appropriate and justified reporting of the sample, including exclusions, ethics and consent; reliable and valid outcome measures; statistically defensible results; and conclusions appropriate to the study. Each criterion met was awarded 1 point, giving a maximum score of 15, with a perfect study (15/15) meeting all requirements. Studies scoring above 10 were considered good quality, and studies scoring above 13 excellent quality. Disagreements were resolved through discussion and consensus.
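The scoring scheme above amounts to a simple binary checklist. A minimal sketch of the logic (the criterion labels and the `appraise` function are illustrative paraphrases, not the authors' actual instrument):

```python
# Sketch of a modified McMaster-style appraisal: 15 binary criteria,
# 1 point each; >10 is treated as good quality, >13 as excellent.
# Criterion labels below are paraphrased for illustration only.
CRITERIA = [
    "clear study purpose", "relevant literature review",
    "clearly stated and appropriate design",
    # ... remaining criteria omitted for brevity (15 in total)
]

def appraise(criteria_met: set) -> tuple:
    """Return (score, quality band) for one study."""
    score = len(criteria_met)  # 1 point per criterion met, max 15
    if score > 13:
        band = "excellent"
    elif score > 10:
        band = "good"
    else:
        band = "lower quality"
    return score, band
```

For example, a study meeting 12 of the 15 criteria would score 12 and fall in the "good" band, while 14 or 15 would be "excellent".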

Results

The search strategy produced 926 articles. After the removal of duplicates, 689 articles were screened for eligibility. Following the screening of the abstracts and titles of these 689 articles against the inclusion/exclusion criteria, 660 were further excluded. The full texts of the remaining 29 articles were then examined, and 19 studies published between 1997 and 2019 were deemed eligible for inclusion in the review. Figure 1 demonstrates a flow chart of the search strategy and the number of articles identified.
Figure 1

Preferred Reporting Items for Systematic Reviews and Meta‐Analyses flow chart.

Study characteristics

The characteristics of the included studies are described in Table 2. Of the 19 studies,3, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36 11 were cohort investigations exploring the diagnostic performance of radiographers19, 20, 21, 22, 23, 25, 26, 27, 33, 34, 35 and seven were cross‐sectional questionnaires3, 24, 28, 29, 30, 31, 32 surveying radiographer image evaluation amongst radiographers. [Correction added on 1 October 2019, after first online publication: This sentence was corrected to include the number of studies that were cohort investigations.] The remaining study was a randomised controlled trial36 examining the effectiveness of intensive versus non‐intensive image interpretation education for radiographers.
Table 2

Study characteristics, results, and critical appraisal score of the reviewed literature

Authors | Study type and aim | Participants | Setting & design | Result | Critical appraisal score (out of 15)
Orames, 199719

Cohort study

Explore general X‐ray interpretation ability and accuracy of radiographers compared to ED physicians

Radiographers, differing experience

ED physicians, differing experience

Single site (public hospital)

Radiographers and ED physicians completed survey over 1‐month period for each suspected abnormal radiograph

541 emergency radiographs

Compared to single radiology report

ACC Radiographers = 87%

ACC ED physicians (only responded for 106 patients) = 89.8%

ACC Radiographers (for the same 106 patients) = 89.1%

7
Jane, Hall & Egan 199920

Cohort study

Investigate radiographers' ability to:

detect normal and abnormal radiographs

provide a provisional diagnosis in conjunction with RDS

Radiographers, differing experience

Single site (public hospital)

Conducted over three 1‐month periods in 1992, 1994 and 1997

Radiographer recorded normal, abnormal or ‘don't know’ on a form for each emergency radiological investigation. 1997 trial included a written ‘provisional diagnosis’

940 examinations (including CT and ultrasound)

Compared to single radiology report

ACC 91.2% when determining normal vs. abnormal radiograph

ACC 85% when assessing the provisional written diagnosis compared to radiology report

6
Younger & Smith, 200221

Cohort study

Measure the ability of radiographers to provide a provisional diagnosis alongside RDS using an opinion form

26 radiographers, differing experience

Single site (public hospital)

Participants filled out a pre‐designed opinion form, indicating radiological impression

820 emergency examinations over a 3‐month period

Compared to single radiology report

ACC 93%

SENS 94.8%

SPEC 91.7%

PPV 89.5%

NPV 96%

10
Cook, Oliver, & Ramsay, 200422

Cohort study

Investigate the accuracy and effectiveness of two radiographers in reporting appendicular MSK radiographs in the adult trauma setting

2 radiographers, 8 and 14 years of experience, with one undertaking a master's degree in radiographic image interpretation

Single site (public hospital)

Both participants undertook image interpretation training at a departmental level

Participants shadow reported 527 MSK appendicular radiographs referred from ED

Compared to single radiology report

ACC 98.48%

SENS 98.97%

SPEC 96.49%

PPV 99.30%

NPV 97.40%

7
Smith, Traise, & Cook, 200923

Experimental cohort study

Evaluate the accuracy of radiographers in interpreting MSK radiographs

Assess the impact of a short continuous education model on accuracy

16 radiographers, differing experience

Multisite

400 abnormal radiographs from axial and appendicular skeleton, categorised into three grades of complexity, grouped into test banks of 25

Participants interpreted 25 images using a pre‐designed opinion form to record opinion, observation and an open comment

Following an intervention of self‐guided education, the cohort was retested after 4 months using the same 25 images

Rating of opinions vs radiologist report was subjectively assessed for clinical significance

Compared to single radiology report

Opinion

Pre‐intervention ACC 75.3%

Post‐intervention ACC 83.8%

Observation

Pre‐intervention ACC 68.0%

Post‐intervention ACC 77.1%

Open comment

Pre‐intervention ACC 57.3%

Post‐intervention ACC 61.0%

8
Hardy, Poulos, Emanuel, & Reed, 201024

Multisite survey

Investigate advanced practice carried out by radiographers in NSW

69 medical imaging supervisors (response rate 60%, n = 69/115)

Multisite survey (public and private practices)

Participants completed questionnaire exploring the use of advanced practice (triage systems, formal or informal reporting, research, cannulation, IV contrast administration) in their respective departments

39.7% of medical imaging directors stated their sites utilised a RADS

Barrier identified

There is a perception that radiographers do not possess the relevant knowledge for radiographic image interpretation

7
Brown & Leschke, 201226

Retrospective analysis

Evaluate the accuracy and clinical value of the RDS

Non‐voluntary cohort

Single site (public hospital)

Retrospective audit of 3638 appendicular MSK radiographs from ED over 4‐month period, focusing on fracture detection

Assessed for the presence of a ‘red dot’

Images without a red dot, found to be obvious fractures, considered true positives

Compared to single radiology report

Identifying appendicular fractures

SENS 80.4%

SPEC 98.0%

PPV 93.5%

NPV 93.5%

A subgroup analysis of non‐displaced fractures (<1 mm displacement)

SENS 45.8%

SPEC 98.0%

PPV 74.8%

NPV 93.5%

3
McConnell, Devaney, Gordon, Goodwin, Strahan, & Baird, 201225

Experimental cohort study

Investigate the effect of a pilot educational intervention on radiographers' ability to describe radiographic abnormalities

10 radiographers, differing experience

Multisite (public hospital)

102 adult appendicular MSK trauma radiographs

Test bank reflected population injury incidence as well as incidence of body region, diagnosis validated by four radiologists

Participants filled out a worksheet with tick box and comment components, with a separate field to indicate level of certainty

Images assessed before, immediately following, and 8–10 weeks after a tailored education programme

Pre‐ and immediately post‐education programme

ACC 82.0, 81.4%

SENS 87.3, 90.8%

SPEC 78.9, 76.0%

PPV 71.0, 70.0%

NPV 92.0, 94.0%

8‐10 weeks following education programme

ACC 86.8%

SENS 93.5%

SPEC 82.9%

PPV 77.0%

NPV 96.0%

12
McConnell, Devaney, & Gordon, 201327

Experimental cohort study

Investigate the effect of an educational programme on radiographers’ ability to describe radiographic abnormalities

10 radiographers, 2 with postgraduate image interpretation education

ED physicians, differing experience

Multisite (public hospital)

655 MSK trauma radiographs obtained from multiple EDs

Radiographers underwent education programme regarding interpretation of MSK trauma radiographs

Radiographers interpreted images at a real‐time workload in their respective EDs

Radiographers filled out a worksheet with a tick box and comment component

Compared to ED physician notes and radiologist reports

No statistically significant difference between radiographers and ED physicians

Radiographers

ACC 88.6%

SENS 94.8%

SPEC 94.8%

PPV 97.0%

NPV 94.8%

ED physicians

ACC 89.5%

SENS 90.8%

SPEC 96.8%

PPV 97.5%

NPV 94.8%

9
Neep, Steffens, Owen, & McPhail, 201430

Multisite survey

Investigate radiographer participation in RADS and perceptions of radiographer image interpretation in the emergency department

Explore perceived barriers, benefits and enablers to radiographer commenting

73 radiographers, differing experience (response rate 68%, n = 73/108)

Multisite (public hospitals)

Cross‐sectional multisite survey consisting of closed‐ and open‐ended questions

Barriers identified

Education access

Lack of time

Low radiographer confidence

Inconsistency of guidelines

Radiographer resistance

Radiologist perception this is a threat to their role

Varying levels of RADS participation identified

10
Neep, Steffens, Owen, & McPhail, 201429

Multisite survey

Investigate radiographer perception of both their readiness to participate in a radiographer commenting system and preferred image interpretation education method

73 radiographers, differing experience (response rate 68%, n = 73/108)

Multisite (public hospital)

Cross‐sectional multisite survey of four major metropolitan hospitals

Questionnaire consisted of four sections testing demographics, self‐perceived confidence in interpreting radiographs, perceived level of accuracy in interpreting radiographs and the preference for educational delivery of either long term (eight 90‐min sessions) or short term (2‐day intensive course)

Barrier identified

Radiographers are not confident in describing radiographic MSK abnormalities

10
Page, Bernoth, & Davidson, 201428

Interpretative phenomenological study

Identify the factors that are hindering or promoting the progression of advanced practice in the Australian radiography profession

7 participants with a radiography or radiation therapy background, working in senior positions (response rate 70%, n = 7/10)

Multisite (public hospitals)

25 questions were sent out to participants to be examined before an in‐depth verbal interview

Questions explored themes relating to barriers and enablers of advanced practice in Australia

Barriers identified

Radiologist perception this is a threat to their role

Education

Lack of direction from professional bodies

Lack of engagement with other professions

8
Squibb, Bull, Smith, & Dalton, 201531

Exploratory interpretive study

Explore rural Australian radiographers’ perspective on disclosing opinions regarding radiographic imaging

185 radiographers from rural workplaces across Australia (response rate 32.4%, n = 185/571)

Multisite (rural centres)

Two‐phase study

Multisite postal survey distributed to rural NSW, WA and TAS, collecting descriptive data regarding respondent demographics

Nine face‐to‐face or phone interviews exploring demographics, attitude regarding radiographic image interpretation

Barrier identified

Lack of understanding around the legality in disclosing opinions

9
Squibb, Smith, Dalton, & Bull, 201632

Two‐phase multisite survey with an interview component

Investigate how rural radiographers communicate and disclose their radiographic opinion to referring physicians in their respective departments

185 radiographers from rural workplaces across Australia (response rate 32.4%, n = 185/571)

Multisite (rural centres)

Two‐phase study

Multisite postal survey distributed to rural NSW, WA and TAS, collecting descriptive data regarding respondent demographics

Nine semistructured interviews to obtain qualitative data regarding interprofessional communication

Barriers identified

Lack of formal education

Lack of communication skills to convey abnormal findings

Interprofessional boundaries due to a historical hierarchy

9
McConnell & Baird, 201733

Experimental cohort study

To measure and compare the ability of final‐year medical students and radiographers in interpreting MSK trauma plain radiographic images

16 radiographers with at least 2 years of experience working in an ED setting

16 final‐year medical students

Volunteer participants

650 MSK trauma radiographs selected for assessment

209 radiographs chosen in total to reflect injury prevalence (adult and paediatrics), validated by four radiologists

Images sent to participants via a USB device

Responses provided via an electronic worksheet

Radiographers

ACC 86.84% (79.43‐89.95)

ROC (fit) 0.955 (0.943‐0.977)

Medical students

ACC 81.34% (77.99‐89.95)

ROC (fit) 0.917 (0.861‐0.948)

15
Neep, Steffens, Riley, Eastgate, & McPhail, 201734

Experimental cohort study

To develop and examine a valid and reliable test to assess radiographers’ ability to interpret trauma radiographs

41 radiographers, differing experience (minimum 12 months)

Volunteer participants

Two‐phase study

Establish a typical anatomical region case‐mix of trauma radiographs from 14,159 cases, to be developed into image interpretation examination

Prospective investigation of its validity and reliability

Association between participant confidence and test score was positively associated (coefficient = 1.52, r 2 = 0.6, P < 0.001)

15
Murphy & Neep, 20183

Multisite survey

Investigate the use of RADS in QLD public hospitals

25 medical imaging directors (response rate 89%, n = 25/28)

Multisite (public hospitals)

Cross‐sectional web‐based questionnaire

Survey explored hospital demographics, the use of RADS in their respective departments and potential factors preventing implementation

16% of sites had a RADS in place

Barriers identified

Education

Lack of resources

Perception of inadequate staff education

9
Williams, Baird, Pearce, & Schneider, 201935

Cohort study

Investigate radiographer performance in interpreting appendicular skeletal radiographs after the intervention of two short education modules

Eight radiographers, differing experience

Volunteer participants

Participants were tested via a random assortment of 25 images (normal to abnormal ratio 25:75) before and after two learning modules

The two modules consisted of an upper limb and shoulder girdle portion and a lower limb and pelvic girdle portion

Participants tested again 6 months post‐intervention to assess retention

Pre‐test module 1 to post‐test module 2 results

ACC 81.68, 85.97%

SENS 82.28, 86.25%

SPEC 75.29, 84.66%

PPV 97.23, 95.67%

NPV 79.24, 61.0%

6 months post‐intervention

ACC 81.34%

SENS 83.51%

SPEC 68.33%

PPV 94.05%

NPV 40.85%

10
Neep, Steffens, Eastgate, & McPhail, 201936

Randomised controlled trial

Compare two methods of image interpretation education and their effects on radiographers’ ability to interpret radiographs

42 radiographers with no formal education in image interpretation (minimum 12 months of experience)

Volunteer participants

60 images from validated test bank – mix of cases, injury prevalence accounted for

Cases assessed by two radiologists and one reporting radiographer

Participants completed a baseline image interpretation examination before being placed into two randomised cohorts

Intensive education (2‐day, 13.5‐hour delivery)

Non‐intensive education (9 weeks, 90 min a week)

Participants were tested again 1 week and 12 weeks post‐intervention to assess retention

Baseline median test score

Intensive 184 (141–215)

Non‐intensive 186 (163–216)

1‐week post‐intervention median test score

Intensive 168 (101–230)

Non‐intensive 150 (128–202)

12‐week post‐intervention median test score

Intensive 220 (178–237)

Non‐intensive 188 (159–216)

15

ACC, accuracy; CT, computed tomography; ED, emergency department; IV, intravenous; MSK, musculoskeletal; NPV, negative predictive value; NSW, New South Wales; PPV, positive predictive value; QLD, Queensland; RADS, radiographer abnormality detection system; RDS, red dot system; ROC, receiver operating characteristic; SENS, sensitivity; SPEC, specificity; TAS, Tasmania; USB, universal serial bus; WA, Western Australia.
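The performance metrics reported throughout Table 2 (ACC, SENS, SPEC, PPV, NPV) all derive from a standard 2×2 confusion matrix against the reference radiologist report. A brief sketch of the standard definitions (the function name and the numbers in the example are illustrative, not taken from any reviewed study):

```python
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard diagnostic performance metrics from a 2x2 confusion matrix,
    with the radiologist report as the reference standard."""
    return {
        "ACC": (tp + tn) / (tp + fp + tn + fn),  # overall agreement
        "SENS": tp / (tp + fn),                  # abnormal cases correctly flagged
        "SPEC": tn / (tn + fp),                  # normal cases correctly passed
        "PPV": tp / (tp + fp),                   # flagged cases truly abnormal
        "NPV": tn / (tn + fn),                   # passed cases truly normal
    }

# Illustrative counts only (not from any reviewed study):
m = diagnostic_metrics(tp=80, fp=5, tn=100, fn=15)
# e.g. ACC = 180/200 = 0.90, SENS = 80/95 ~ 0.842, SPEC = 100/105 ~ 0.952
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on the abnormal-to-normal case mix of the test bank, which is one reason the reviewed studies' figures are not directly comparable.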

semistructured interviews to obtain qualitative data regarding interprofessional communication Barriers identified Lack of formal education Lack of communication skills to convey abnormal findings Interprofessional boundaries due to a historical hierarchy Experimental cohort study To measure and compare the ability of final‐year medical students and radiographers in interpreting MSK trauma plain radiographic images 16 radiographers with at least 2 years of experience working in an ED setting 16 final‐year medical students Volunteer participants 650 MSK trauma radiographs selected for assessment 209 radiographs chosen in total to reflect injury prevalence (adult and paediatrics), validated by four radiologists Images sent to participants via a USB device Responses provided via an electronic worksheet Radiographers ACC 86.84% (79.43‐89.95) ROC (fit) 0.955 (0.943‐0.977) Medical students ACC 81.34% (77.99‐89.95) ROC (fit) 0.917 (0.861‐0.948) Experimental cohort study To develop and examine a valid and reliable test to assess radiographers’ ability to interpret trauma radiographs 41 radiographers, differing experience (minimum 12 months) Volunteer participants Two‐phase study Establish a typical anatomical region case‐mix of trauma radiographs from 14,159 cases, to be developed into image interpretation examination Prospective investigation of its validity and reliability Association between participant confidence and test score was positively associated (coefficient = 1.52, r 2 = 0.6, P < 0.001) Multisite survey Investigate the use of RADS in QLD public hospitals 25 medical imaging directors (response rate 89%, n = 25/28) Multisite (public hospitals) Cross‐sectional web‐based questionnaire Survey explored hospital demographics, the use of RADS in their respective departments and potential factors preventing implementation 16% of sites had a RADS in place Barriers identified Education Lack of resources Perception of inadequate staff education Cohort study Investigate 
radiographer performance in interpreting appendicular skeletal radiographs after the intervention of two short education modules Eight radiographers, differing experience Volunteer participants Participants were tested via a random assortment of 25 images (normal to abnormal ratio 25:75) before and after two learning modules The two modules consisted of an upper limb and shoulder girdle portion and a lower limb and pelvic girdle portion Participants tested again 6 months post‐intervention to assess retainment Pre‐test module 1 to post‐test module 2 results ACC 81.68, 85.97% SENS 82.28, 86.25% SPEC 75.29, 84.66% PPV 97.23, 95.67% NPV 79.24, 61.0% 6 months post‐intervention ACC 81.34% SENS 83.51% SPEC 68.33% PPV 94.05% NPV 40.85% Randomised control trial Compare two methods of image interpretation education and their effects on radiographers’ ability to interpret radiographs 42 radiographers with no formal education in image interpretation (minimum 12 months of experience) Volunteer participants 60 images from validated test bank – mix of cases, injury prevalence accounted for Cases assessed by two radiologists and 1 reporting radiographer Participants completed a baseline image interpretation examination before being placed into two randomised cohorts Intensive education (2‐day, 13.5‐hour delivery) Non‐intensive education (9 weeks, 90 min a week) Participants were tested again 1 week and 12 weeks post‐intervention to assess retainment Baseline median test score Intensive 184 (141–215) Non‐intensive 186 (163–216) 1‐week post‐intervention median test score Intensive 168 (101–230) Non‐intensive 150 (128–202) 12‐week post‐intervention median test score Intensive 220 (178–237) Non‐intensive 188 (159–216) ACC, accuracy; CT, computed tomography; ED, emergency department; IV, intravenous; MSK, musculoskeletal; NPV, negative predictive value; NSW, New South Wales; PPV, positive predictive value; QLD, Queensland; RADS, radiographer abnormality detection system; RDS, red dot 
system; ROC, receiver operating characteristic; SENS, sensitivity; SPEC, specificity; TAS, Tasmania; USB, universal serial bus; WA, Western Australia.

Radiographer image interpretation studies

The 19 studies reviewed explored two primary themes: studies exploring radiographers interpreting radiographs (n = 12)19, 20, 21, 22, 23, 25, 26, 27, 33, 34, 35, 36 and studies investigating the use of and the barriers to radiographer image evaluation systems (n = 7).3, 24, 28, 29, 30, 31, 32 The results of the studies exploring radiographers’ ability to interpret radiographs are presented in Table 2. Accuracy, sensitivity and specificity values ranged from 57 to 98% (n = 9),19, 20, 21, 22, 23, 25, 27, 33, 35 68 to 98% (n = 5)21, 22, 26, 27, 35 and 68 to 98% (n = 5),21, 22, 26, 27, 35 respectively. [Correction added on 1 October 2019, after first online publication: The range of sensitivity values has been corrected.] Twelve of the 19 studies summarised in Table 2 were observer performance studies, of which 10 (n = 10/12) were cohort studies.19, 20, 21, 22, 23, 25, 27, 33, 34, 35 Two of the cohort studies compared the radiographers’ diagnostic opinion with that of emergency medical officers; however, their methodologies varied immensely.19, 27 The first comparison of radiographers and emergency doctors, made in 1997, was purely a comparison with no educational intervention.19 The second study, conducted in 2013, compared the image interpretation ability of radiographers who received targeted education to that of emergency doctors, and found both cohorts to demonstrate similar diagnostic performance.27 A single cohort study compared final‐year medical students to radiographers and reported that radiographers had a higher overall accuracy and receiver operating characteristic (ROC) curve fit.33 Three cohort studies examined radiographers’ ability to interpret radiographs against the radiology report, with radiographer accuracy measures ranging from 85 to 98%.19, 21, 22 A single cohort study investigated the reliability and validity of an image interpretation examination for further use in testing radiographers’ ability to interpret radiographs and found a positive association between radiographers’ confidence and their result on the image test bank.34 Three of the cohort studies examined the effect of an educational intervention on radiographers’ ability to interpret radiographs and concluded that education had a positive short‐term effect on performance.23, 25, 35 This outcome was shared by the aforementioned comparison study by McConnell involving radiographers and emergency doctors.27 The remaining two interpretation studies comprised a retrospective review26 and a randomised control trial.36 The retrospective study compared a cohort of radiographers participating in a voluntary red dot system against radiology reports with regard to the detection of appendicular fractures, and suggested radiographers found it challenging to detect subtle non‐displaced fractures (<1 mm displacement).26 The randomised control trial assessed the effectiveness of two formats of image interpretation education designed to improve radiographers’ ability to interpret radiographs. The outcome of this trial indicated that intensive radiographic image interpretation education (13.5 h over 2 days) resulted in a greater improvement in radiographer interpretive performance than a non‐intensive (i.e. traditional) format of education (13.5 h over 9 weeks).36
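The diagnostic performance measures reported throughout these studies (ACC, SENS, SPEC, PPV, NPV) all derive from a 2×2 confusion matrix built against the reference radiology report. As a minimal sketch of how such figures are computed, the counts below are purely illustrative and are not taken from any reviewed study:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard 2x2 confusion-matrix measures used in observer
    performance studies (values returned as percentages)."""
    total = tp + fp + tn + fn
    return {
        "accuracy": 100 * (tp + tn) / total,
        "sensitivity": 100 * tp / (tp + fn),  # abnormal cases correctly flagged
        "specificity": 100 * tn / (tn + fp),  # normal cases correctly cleared
        "ppv": 100 * tp / (tp + fp),          # positive predictive value
        "npv": 100 * tn / (tn + fn),          # negative predictive value
    }

# Illustrative counts only -- not data from any study in this review
m = diagnostic_metrics(tp=90, fp=10, tn=85, fn=15)
print({k: round(v, 1) for k, v in m.items()})
# → {'accuracy': 87.5, 'sensitivity': 85.7, 'specificity': 89.5, 'ppv': 90.0, 'npv': 85.0}
```

Note how sensitivity and PPV answer different questions (detection of abnormals versus trustworthiness of a positive call), which is why the reviewed studies report both alongside accuracy.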

Cross‐sectional studies exploring the use of, and barriers to, radiographer image interpretation

The results of the seven studies investigating the use of, and the barriers to, radiographer image interpretation are presented in Table 2.3, 24, 28, 29, 30, 31, 32 Of the seven studies, three multisite surveys explored radiographer image interpretation.3, 24, 28 The four remaining studies targeted senior medical imaging personnel and explored the barriers to, use of and perception of image interpretation by radiographers of varying experience.29, 30, 31, 32 When exploring the prevalence of radiographers involved in an image evaluation system, results ranged from 16 to 82%.3, 24, 30 The seven studies in this category came to similar conclusions regarding the barriers to implementation, namely that radiographers perceived they did not have access to appropriate education to participate in image evaluation systems or had insufficient support from either radiologists or their radiographer peers.3, 24, 28, 29, 30, 31, 32

Study quality

Following completion of the critical quality assessment, three articles33, 34, 36 were deemed to be of excellent quality (15/15 points), one article25 of good quality (12/15 points) and two20, 26 of the lowest quality, scoring 6/15 and 3/15 points, respectively. The results of the critical analysis are detailed in Table 2. Studies that scored lower on the appraisal form had one or more of the following limitations: they provided limited information regarding study purpose and design; lacked clarity regarding the choice of sample size; did not provide ethics approval details; did not report outcome measure validation or reliability; did not perform inferential statistical analysis; or drew conclusions unsupported by the study methodology and results. The methodological quality of the studies examined varied. The majority (n = 18)3, 19, 20, 21, 22, 23, 24, 25, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36 of articles clearly stated a purpose and included a relevant background literature review. The study design was not explicitly stated in just under half of the articles reviewed (n = 9)19, 20, 21, 22, 23, 25, 26, 27, 35; however, a majority (n = 17)3, 19, 20, 21, 22, 24, 25, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36 employed an appropriate study design based on the aims.

Discussion

Over the past 21 years, many attempts have been made to (1) measure the ability of radiographers to undertake general radiograph evaluation or participate in commenting systems such as RADS and PIE and (2) explore the enablers and barriers to the implementation of these systems. Evidence from the literature reveals that radiographers display varying levels of ability to detect and describe radiographic abnormalities, with performance metrics ranging from poor to excellent.19, 20, 21, 22, 23, 25, 26, 27, 33, 35 This variation is partly attributable to methodological differences between studies; caution is therefore advised when interpreting the performance results as a whole. Although the performance metrics observed for radiographers varied considerably, one finding remained consistent: targeted radiographic image interpretation education, whether through self‐guided modules or a structured classroom environment, improves radiographers’ capacity to undertake radiographic image evaluation.23, 25, 27, 35, 36

Sixty per cent of the studies reviewed explored radiographer image interpretation performance. These studies can be further divided into two subsets: investigations of the baseline performance of radiographers’ ability to interpret radiographs19, 20, 21, 26, 33 and investigations of the effectiveness of image interpretation education.22, 23, 25, 27, 34, 35, 36 A prominent element in this review was the variation in study design, in both quality and methodology. When designing image interpretation studies, it is important to consider the appropriate design, development and testing of image bank contents, as well as the development of a reference standard, to ensure reliable and valid results.
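One practical way to honour the image-bank design principle above is to fix the bank's composition in advance so it mirrors the clinical case-mix, rather than letting readers or researchers select cases. The sketch below is a hypothetical illustration of proportional sampling by body region and abnormality prevalence; the region names and proportions are assumptions for demonstration, not figures from the reviewed studies:

```python
import random

def build_test_bank(cases, region_mix, abnormal_rate, size, seed=0):
    """Draw a fixed-size test bank whose body-region proportions and
    abnormal/normal split mirror a target clinical case-mix.

    cases: list of dicts like {"id": ..., "region": ..., "abnormal": bool}
    region_mix: target shares per region, e.g. {"wrist": 0.4, "ankle": 0.35,
                "shoulder": 0.25} (hypothetical values)
    abnormal_rate: target proportion of abnormal cases in the bank
    """
    rng = random.Random(seed)  # fixed seed keeps the bank reproducible
    bank = []
    for region, share in region_mix.items():
        n_region = round(size * share)
        n_abn = round(n_region * abnormal_rate)
        pool_abn = [c for c in cases if c["region"] == region and c["abnormal"]]
        pool_nor = [c for c in cases if c["region"] == region and not c["abnormal"]]
        bank += rng.sample(pool_abn, n_abn)
        bank += rng.sample(pool_nor, n_region - n_abn)
    rng.shuffle(bank)  # present cases to readers in random order
    return bank
```

Because every participant receives the same pre-specified bank, no one can abstain from cases judged ‘too difficult’ or ‘too easy’, which is precisely the selection bias several of the baseline studies suffered from.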
When testing image interpretation ability, it is essential to employ an image bank that reflects both the typical injury prevalence and the proportion of anatomy examined in the clinical setting, to mitigate biases that may limit the relevance of the findings.37, 38, 39 Radiographers’ ability to describe findings in radiographs without educational intervention, known as a ‘baseline’, is one such metric researchers can use when exploring improvement methods. The significance of such studies cannot be overstated, and their methodological approach must be accounted for when drawing conclusions from them. Five of the studies reviewed19, 20, 21, 26, 33 established a baseline performance metric. Four of these five studies19, 20, 21, 26 had a methodological flaw in the form of participant image selection bias, whereby participants could abstain from interpreting images if they felt it was not necessary (obvious pathology or challenging cases). In practice, this can lead to radiographer non‐participation in cases considered ‘too difficult’ or ‘too easy’, resulting in an unreliable metric of radiographic image interpretation performance. The remaining performance study,33 which did not employ an interventional component when comparing the image interpretation ability of medical students to that of radiographers, made a considerable effort to overcome this bias.19, 20, 21, 26 This 2017 study33 demonstrated the potential for radiographers to aid junior doctors in radiographic image interpretation and the benefit this may have in clinical practice.

A theme identified from the literature was determining the appropriate format of an educational intervention and the effectiveness of that format on radiographers’ ability to interpret radiographs. The quality of performance studies examining the effect of education varied.
Six studies explored the effect of an intervention on radiographers’ ability to interpret radiographs,22, 23, 25, 27, 35, 36 whilst one study was dedicated entirely to creating a valid and reliable test bank.34 Although experience levels within the radiography workforce are inherently inconsistent, prior education was not a heavily considered variable; it would be of benefit to assess radiographers at entry to the workforce to better reflect the entry standard of the profession. It was therefore difficult to extrapolate the outcomes of studies that included radiographers with postgraduate image interpretation education to the greater Australian healthcare setting, a notable shortcoming observed in two studies.22, 25 Education alone, regardless of the method or format, improved radiographers’ ability to describe radiographic findings,23, 25, 27, 35, 36 with performance similar to that of emergency doctors.27 However, it is important to employ educational interventions that help radiographers retain radiographic image interpretation skills;35, 36 these interventions need to improve radiographers’ baseline ability not only in the short term but also to provide the skill sets needed for continuous practice.35 The single randomised control trial reviewed36 showed that radiographers’ interpretive performance was markedly better after a condensed programme of education than after multiple sessions spread over a number of weeks. The development of the reliable and valid test bank34 utilised in this randomised control trial created an opportunity to examine radiographers’ performance via a standardised approach.36 The results of the randomised control trial alone suggest departments should opt for condensed education programmes to improve image interpretation, whilst the issue of skill retention could be further measured and addressed using a validated radiographic image bank completed at regular intervals.
This could be incorporated as part of an annual skills competency.

The quality of the literature exploring the performance of radiographers in detecting and describing radiographic findings is notably higher in more recent publications. The only studies that scored the maximum mark of 15 following the critical appraisal were published between 2017 and 2019.33, 34, 36 Performance studies published after 2012,25, 27, 33, 34, 35, 36 with the exception of the lowest scoring article reviewed (3/15),26 were meticulous not only in addressing bias but also in ensuring results were reliable and valid. Although studies preceding 2012 have their merits, caution is advised when citing them as rationale for or against radiographic image evaluation by radiographers in Australia, due to their methodological limitations.

Exploring potential barriers when implementing a new clinical initiative is imperative,40 yet the barriers and enablers of systems such as PIE have been sparsely explored within Australia over the last 20 years. The common barriers to the development of radiographer image evaluation were a perceived lack of education and the perceptions and support of both radiographers and radiologists.3, 24, 28, 29, 30, 31, 32 Education acts as both a barrier and an enabler, suggesting that the development of systems such as PIE hinges on addressing access to education at a national level. Tailored training and educational interventions in line with the current literature,36 and the provision of relevant continuing professional development (CPD) resources to better assist radiographers in maintaining performance, are required. The use of Internet‐based learning tools similar to the eLearning systems in the UK41 may be worthy of consideration. The literature also suggests that the identified barrier around lack of radiologist and fellow radiographer support may stem from the inconsistent use of terminology for radiographer image interpretation in the literature.
Such terminological inconsistencies could have raised concern amongst radiologists that radiographers were providing a diagnostic report. The definitions used in the early research are unclear, particularly regarding the terms ‘reporting’ and ‘commenting’. The language used in the Cook et al. workplace trial of radiographer reporting22 suggested radiographers were providing a diagnostic report in the ‘trial’ setting; however, the experimental design was that of a second ‘radiological impression’ compared to the radiologist report. The use of ‘reporting’ as a term interchangeable with ‘commenting’ is also evident in another study, where informal, verbal comments are referred to as ‘verbal plain film reporting’.24 These findings suggest that a universal term clearly defining the meaning of radiographer commenting should be adopted. A recent paper ‘describing the strategies that Australian rural radiographers use for communication of their radiographic opinion to the referring doctor’ provides a better context for radiographic image interpretation,32 whereby a comment is notably different from a diagnostic report (a role held exclusively by radiologists in Australia).

In 2018, the Royal Australian and New Zealand College of Radiologists (RANZCR) issued a position statement titled ‘Image Interpretation by Radiographers – Not the Right Solution’,42 formally opposing the implementation of radiographer image evaluation systems such as PIE in any setting within Australia. This opposition may have been influenced by an inadequate search strategy compounded by the inconsistencies in terminology observed in the texts. Notably, the position statement did not reference or explore any of the 19 studies covered in this review. Drawing from opinion pieces and studies conducted overseas, the statement demonstrates an ill‐informed understanding of the terms utilised in the research and of the purpose of PIE.
It is likely that this interprofessional barrier could be overcome with a less aggressive tone in conjunction with a clear, universal definition of ‘radiographer commenting’/PIE. Furthermore, future studies should aim to assess the performance of a PIE system in clinical practice; the results of such studies may alleviate the concerns held by the RANZCR. Two studies31, 32 reported that radiographers have concerns regarding the potential medico‐legal ramifications of radiographer image evaluation systems such as PIE. However, it is worth considering that medical litigation may arise at any step of the medical imaging pathway, from poor‐quality imaging to inaccurate communication of findings.43, 44 For example, two recent coroners’ findings45, 46 concluded that radiographers could have played a more active role in the medical imaging team by communicating findings to the referring clinician, potentially avoiding the deaths of two patients. The MRPBA ‘professional capabilities for medical radiation practice’ statement11 clearly states that radiographers must convey information to the referrer when an unexpected or urgent finding is noted. It could further be interpreted that a radiographer who does not participate in a PIE system or similar may be in breach of their professional registration.

There are several strengths and limitations of this review that are worthy of consideration. The review was comprehensive and provides major insights into the performance of radiographer image evaluation and the barriers and enablers to its implementation. A further strength is the thorough methodological approach, including a balanced and transparent approach to study selection and quality evaluation. This approach mitigated selection bias and ensured only quality studies were included in the review.
The McMaster critical appraisal tool utilised has limitations because it was mildly modified and not retested for reliability and validity. Another limitation is that the majority of studies reviewed relied on voluntary cohorts for the assessment of radiographer performance; this volunteer bias may affect the validity of the results and should be taken into consideration when interpreting the findings of this review. The substantial variation in study designs also warrants consideration, as the methodological variations across studies limited the ability to pool data for analysis.

Conclusion

Findings from this review indicate that Australian radiographers can undertake radiographic abnormality detection and PIE; however, educational and clinical support barriers limit the implementation of radiographer image evaluation systems. Access to targeted educational resources and a clear definition of radiographers’ image evaluation role may drive a wider acceptance of radiographer image evaluation in Australia. Moving forward, the literature will benefit from well‐designed projects that assess radiographers’ image evaluation performance in the clinical setting.