
Impact of Simulation-Based Training on Radiation Therapists' Workload, Situation Awareness, and Performance.

Lukasz M Mazur1,2,3, Robert Adams1, Prithima R Mosaly1,2,3, Marjorie P Stiegler4, Joseph Nuamah1, Karthik Adapa1,3, Bhishamjit Chera1, Lawrence B Marks1.   

Abstract

PURPOSE: This study aimed to assess the impact of a simulation-based training intervention on radiation therapy therapist (RTT) mental workload, situation awareness, and performance during routine quality assurance (QA) and treatment delivery tasks. METHODS AND MATERIALS: As part of a prospective institutional review board-approved study, 32 RTTs completed routine QA and treatment delivery tasks on clinical scenarios in a simulation laboratory. Participants, randomized to receive (n = 16) versus not receive (n = 16) simulation-based training, had pre- and postintervention assessments of mental workload, situation awareness, and performance. We used linear regression models to compare the postassessment scores between the study groups while controlling for baseline scores. Mental workload was quantified subjectively using the NASA Task Load Index. Situation awareness was quantified subjectively using the situation awareness rating technique and objectively using the situation awareness global assessment technique. Performance was quantified based on procedural compliance (adherence to preset/standard QA timeout tasks) and error detection (detection and correction of embedded treatment planning errors).
RESULTS: Simulation-based training intervention was associated with significant improvements in overall performance (P < .01), but had no significant impact on mental workload or subjective/objective quantifications of situation awareness.
CONCLUSIONS: Simulation-based training might be an effective tool to improve RTT performance of QA-related tasks.
© 2020 The Author(s).


Year:  2020        PMID: 33305071      PMCID: PMC7718555          DOI: 10.1016/j.adro.2020.09.008

Source DB:  PubMed          Journal:  Adv Radiat Oncol        ISSN: 2452-1094


Introduction

Our field of Radiation Oncology enjoys a strong tradition of proactively and innovatively addressing evolving patient safety and quality assurance (QA) challenges.1, 2, 3, 4, 5, 6, 7, 8 Radiation therapy therapists (RTTs) play a critical role in assuring safety and quality because they are the last line of defense to catch any upstream errors (eg, from treatment planning). Nevertheless, there has been a dramatic shift in the manner in which RTTs perform their work, which may affect their performance and thus patient safety. For example, traditional treatment approaches were conducive to an active, hands-on mindset (eg, checking the light field or measurements on the patient’s skin; ensuring that the field size, beam orientation/shape, and monitor units are appropriate for the particular patient/target). With newer treatment approaches, RTTs perform QA efforts in a somewhat passive manner, more separated from the patient (eg, reviewing computer-generated data), highlighting the need for innovative interventions to continue to ensure patient safety. Simulation-based training has been widely used in industry and in several areas of health care to help workers cope with a variety of challenging circumstances and enhance safety.10, 11, 12, 13, 14, 15, 16, 17, 18, 19 Recently, Mazur et al developed and tested simulation-based training for radiation oncologists, dosimetrists, and physicists during the treatment planning process (activities that are essentially all upstream from the RTTs’ efforts) and found that simulation-based training improved radiation therapy planners’ QA procedural performance. Herein, we report the results of our initiative to assess the impact of simulation-based training on RTTs’ mental workload, situation awareness, and performance during routine QA and treatment delivery tasks.

Methods and Materials

Subjects and setting

Thirty-two RTTs from 2 large academic institutions (21 women; 11 men) participated in this institutional review board–approved prospective study. RTTs were block-randomized to receive (n = 16) versus not receive (n = 16) simulation-based training (Fig. 1). Based on prior studies, a sample of 16 participants per study group was determined to be sufficient to detect relatively modest (approximately 5%) changes in our study endpoints (mental workload, situation awareness, and performance), assuming the use of a 2-group t test with a 2-sided alpha of .05 and 80% power.
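As a rough check on the stated design (not the authors' calculation), the minimum detectable standardized effect size for a 2-group t test with 16 participants per group can be approximated with normal quantiles; the exact t-based value would be slightly larger:

```python
# Hedged sketch: normal-approximation of the minimum detectable standardized
# effect size (Cohen's d) for a 2-group comparison with n per group,
# 2-sided alpha, and a target power. Illustrative only.
from statistics import NormalDist
import math

def detectable_effect(n_per_group, alpha=0.05, power=0.80):
    z = NormalDist().inv_cdf
    # d ~= (z_{1-alpha/2} + z_{power}) * sqrt(2/n)
    return (z(1 - alpha / 2) + z(power)) * math.sqrt(2 / n_per_group)

print(round(detectable_effect(16), 2))  # roughly a 1-SD between-group difference
```

With n = 16 per group, the design is powered for fairly large standardized differences, consistent with the study's focus on modest absolute changes in bounded 0-100 scores.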
Figure 1

Overall study design.

Participants received $100 gift cards for completing pre- and postintervention assessments. Participants also received continuing education credits because our simulation-based training was approved by the American Society of Radiologic Technologists (reference number: NCZ0218011F). Simulated assessments were performed in our human factors laboratory using an emulator and workstation that closely replicated RTTs’ typical working environment. Before any formal assessments were made, each participant received training to ensure they could effectively navigate the record-and-verify software (Mosaiq) and operate the emulator equipment (eg, control panel and treatment delivery displays). During the last portion of the training, each participant was asked to perform a series of simple tasks to confirm their proficiency with the equipment and treatment delivery software (eg, opening and closing patient charts; locating specific information [name, date of birth, shifts, special instructions]; starting and ending [in case of emergency] treatment).

Preassessment

Each RTT was asked to complete 2 routine QA and treatment delivery scenarios while working with a second therapist, an actor who was instructed and trained to conduct the assessments. Each scenario included 1 embedded error. The primary reasons for having 2 scenarios were to avoid the likelihood of a suboptimal or optimal performance associated with only 1 scenario and control for the total time that participants would spend in our laboratory, especially because participants randomized to our intervention were scheduled for 1-hour simulation-based training after the preassessment. Participants had access via phone to a dosimetrist, physicist, and radiation oncologist (all represented by actors [radiation therapy professionals]; all aware of the embedded errors) for any questions or issues as needed. The simulated scenarios involved the completion of comprehensive timeouts before treatments and delivery of treatments to patients. There was no time limit to complete the scenarios.

Intervention: Simulation-based training

The simulation-based training was based on the principles for effective design and use of simulation-based training and debriefing sessions proposed by Mazur et al., which in turn were guided by a set of validated approaches across other health care disciplines. The general dimensions of our simulation-based training are presented in Table 1. Immediately after the preassessments, participants randomized to receive the simulation-based training spent approximately 1 hour conducting a variety of simulated scenarios with variable levels of difficulty, requiring them to complete timeouts and deliver treatments to patients while communicating (as needed) via phone with dosimetrists, physicists, and radiation oncologists (all represented by actors [radiation therapy professionals]) regarding possible issues or errors. Participants were also instructed to properly document (as needed) any issues or errors they found in the record-and-verify software.
Table 1

Dimensions and description of simulation-based training

Dimensions | Description
Aims and purpose of simulation activity | Assess impact of simulation-based training on radiation therapy therapist mental workload, situation awareness, and performance
Unit of participation | Individual
Health care domain | Radiation oncology
Professional discipline of participants | Radiation therapists
Type of knowledge, skills, attitudes, or behavior addressed | Deliver treatment to patients, complete comprehensive timeouts, communicate with other care team members, document errors
Technology applicable or required | Emulator equipment (control panel, treatment delivery displays)
Site of simulation | Laboratory setting
Extent of direct participation | Highly interactive with significant, direct, on-site, hands-on participation
Method of feedback used | Debriefing
Experience level of participants | Novice to expert
Simulated patient age | Only adult patients
The simulation-based training included a subsequent standardized 1-on-1 debriefing session led by an experienced educator (a former therapist/dosimetrist and member of the research team), who was trained by a simulation-based training expert (also a member of our research team) on how to conduct these debriefing sessions.21, 22, 23, 24, 25, 26, 27 Table 2 shows the broad safety concepts included in our debriefing sessions, as proposed by Mazur et al. Specifically, based on the recommendations by Mazur et al, each debriefing session started with a review of the importance of comprehensive timeout concepts and elements, the need for communication and resolution of any errors before treatment, and proper documentation of decisions and actions on any encountered errors, followed by the actual review and discussion of participants’ mental workload, situation awareness, and performance. The overarching goal of the debriefing session was to help participants better appreciate how safety mindfulness during timeout and treatment procedures can enhance patient safety and protect them from unintended human errors resulting from suboptimal mental workload (eg, rushing, distractions) and reduced situation awareness (eg, complex information, vague communication, documentation, and patient movement). After the debriefing session, each participant was given an opportunity to conduct additional simulated scenarios to further practice and reflect on procedural compliance with timeouts, including error communication and documentation procedures.
Table 2

Summary of safety concepts used during simulation-based training

Safety concept | Conceptual framework | Key teaching points
Limitations of human cognitive capabilities | Information processing theory | Relationships between task demands, workload, situation awareness, and human performance
Interactive complexity of radiation therapy systems | Normal accident theory | Relationship between system characteristics (eg, complexity, stressors, interface usability), quality assurance system design (eg, checklists, timeouts, automation-supported quality assurance), and human performance
Suboptimal communication and documentation | Swiss-cheese model | Relationship between latent failures in communication and documentation (eg, suboptimal notes, vague instructions, errors in prescriptions) and human performance
For consistency purposes, participants randomized to the control group (without simulation-based training) were also given an opportunity to conduct additional simulated scenarios. They were given a 30- to 45-minute break after the preassessment to somewhat align the timing of the completion of the overall preassessment process with the intervention group.

Postassessments

For all participants (intervention and control groups), the interval between the pre- and postassessments was approximately 30 days to ensure consistency in measurement. As in the preassessments, each participant was instructed to complete 2 routine QA and treatment delivery scenarios while working with a second therapist (an actor [radiation therapy professional]). The scenarios involved new patients, QA checks, and treatments (relative to the preintervention assessments), with each scenario including 1 embedded error. Participants also had access via phone to a dosimetrist, physicist, and radiation oncologist (all actors [radiation therapy professionals]) for any questions or issues as needed. The scenarios involved completion of comprehensive timeouts before the treatment and delivery of treatments to patients. Participants were given no time limit to complete the scenarios. During the pre- and postintervention assessments, participant performance was observed and assessed in real time, using paper-based forms, by researchers in an adjacent control room behind a 1-way window and by the actor RTT. Audio and video data were recorded using a camera mounted on the eye-tracking glasses (SensoMotoric Instruments) worn by participants throughout the assessments. The audio and video data were used by the researchers to aid the assessment of performance from the control room. A multidisciplinary team consisting of dosimetry and therapy educators, with input from human factors engineers, designed these scenarios and embedded errors. The embedded errors were carefully constructed to be rare but realistic (eg, a pacemaker missing from the Mosaiq assessment despite a note to the physicist to place mosfets, an incorrect multileaf collimator field shape, incorrect/high monitor units, an incorrectly labeled treatment site and field name), with the express intent of making them somewhat challenging to detect.

Data collection

Quantification of subjective workload

At the end of each simulated scenario, participants completed the NASA Task Load Index (NASA-TLX) questionnaire. The NASA-TLX considers 6 dimensions of workload (mental, physical, temporal demands, frustration, effort, and performance) and is widely considered to be a valid and reliable subjective measure of mental workload.28, 29, 30 The NASA-TLX requires participants to score each dimension on a 10-point rating scale (0 = low; 10 = high) based on their performance of the task under analysis and perform 15 pairwise comparisons between the dimensions to derive their relative weights. The ratings are combined to calculate a measure of participants’ mental workload as a composite score ranging from 0 (low mental workload) to 100 (high mental workload).

Quantification of situation awareness

At the end of each simulated scenario, participants completed the situation awareness rating technique (SART) questionnaire. The SART has been validated in many domains, including aviation, nuclear power plants, and health care. Its 10 dimensions measure operator situation awareness across 3 areas: demands on attentional resources (D dimensions: likeliness of the situation to change suddenly; number of variables that require attention; degree of complication of the situation), supply of attentional resources (S dimensions: degree that one is ready for activity; amount of mental ability available for new variables; degree that one’s thoughts are brought to bear on the situation; amount of division of attention in the situation), and understanding of the situation (U dimensions: amount of knowledge received and understood; degree of goodness of value of knowledge communicated; degree of acquaintance with the situation experience). The SART involves participants subjectively rating each dimension on a 7-point rating scale (1 = low; 7 = high) based on their performance of the task under analysis. The ratings are combined (U − [D − S]) to calculate a subjective measure of participants’ situation awareness as a composite score ranging from 0 (low situation awareness) to 46 (high situation awareness). We also collected objective data on situation awareness using the situation awareness global assessment technique (SAGAT). The SAGAT is a probing technique developed based on information-processing theory and is one of the most widely used objective measures of situation awareness, with a high degree of validity and reliability. During each simulated scenario, while treating the patient, participants were asked 2 probing questions by the second therapist (actor/radiation therapy professional), representing 2 different levels of situation awareness (eg, What side of the patient is the gantry on right now? [perception level]; Does this patient have a pacemaker? [comprehension level]). All participants were given up to 10 seconds to respond to each question. Correct versus incorrect responses were marked as 1 and 0, respectively, and then averaged to form a composite score.
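The SART combination rule U − (D − S) can be sketched as follows (the item ratings and item labels are illustrative, not study data):

```python
# Hedged sketch of the SART composite: SA = U - (D - S), where U sums the
# 3 understanding items, D the 3 demand items, and S the 4 supply items,
# each rated 1-7. Item labels in the comments are illustrative.

def sart_composite(understanding, demand, supply):
    """SART situation awareness score: U - (D - S), from 1-7 item ratings."""
    u, d, s = sum(understanding), sum(demand), sum(supply)
    return u - (d - s)

u_items = [6, 5, 6]      # knowledge received; value of knowledge; familiarity
d_items = [3, 4, 3]      # instability; number of variables; complexity
s_items = [6, 5, 5, 4]   # readiness; spare capacity; concentration; attention division
print(sart_composite(u_items, d_items, s_items))
```

A higher supply of attentional resources raises the score, while higher demands lower it, which is why a single subscore (eg, concentration, in the S area) can differ between groups even when the composite does not.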

Quantification of performance

To arrive at the overall performance score (range, 0-100), we averaged the scores from procedural compliance to timeout components and error detection. For procedural compliance with timeout components (range, 0-100), RTTs were expected to complete a standard timeout to ensure the best clinical decisions and detect possible errors. For each participant, the procedural compliance score represented the number of relevant timeout components not missed (ie, conducted properly) divided by the total relevant timeout components. For error detection (range, 0-100), RTTs were expected to detect embedded errors, including proper follow-up communication and documentation of these errors. Therefore, for each simulated scenario, each participant received a 0 versus 100 score for each properly performed action (detection, communication, and documentation of errors), summed together and divided by 3. Each scenario included 1 embedded error.
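The scoring scheme above can be sketched as follows (a minimal sketch assuming equal weighting of the two components, as the text describes; the example inputs are illustrative):

```python
# Hedged sketch of the paper's performance scoring: the overall score (0-100)
# averages procedural compliance and error detection. Error detection averages
# three 0/100 marks (detection, communication, documentation of the scenario's
# single embedded error).

def procedural_compliance(components_done, components_total):
    """Percent of relevant timeout components conducted properly."""
    return 100.0 * components_done / components_total

def error_detection(detected, communicated, documented):
    """Average of three 0/100 marks for the embedded error."""
    marks = [100 if ok else 0 for ok in (detected, communicated, documented)]
    return sum(marks) / 3.0

def overall_performance(compliance_score, detection_score):
    """Overall score (0-100): mean of the two component scores."""
    return (compliance_score + detection_score) / 2.0

comp = procedural_compliance(components_done=9, components_total=10)
det = error_detection(detected=True, communicated=True, documented=False)
print(round(overall_performance(comp, det), 1))
```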

Statistical analysis

At each timepoint (before and after), the scores for NASA-TLX, SART, and overall performance were similar between the 2 scenarios that were assessed; thus, the scores were averaged across the scenarios at each timepoint. Linear regression models were fit to assess the difference in postintervention assessment scores (NASA-TLX, SART, and overall performance) between RTTs who did and did not receive simulation-based training, while controlling for the baseline scores. For the individual NASA-TLX, SART, and performance dimensions, linear models were fit identically to those for the composite scores. P values <.05 were defined as statistically significant. The analyses were performed using SAS, version 9.4 (SAS Institute, Cary, NC).
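The baseline-adjusted comparison can be sketched as an ordinary least-squares model, post ~ intercept + baseline + group, where the coefficient on the group indicator is the adjusted between-group difference (this is a sketch with simulated data, not the authors' SAS code):

```python
# Hedged sketch of a baseline-adjusted (ANCOVA-style) group comparison with
# simulated data: 16 per group, and the training group improves by ~25 points.
import numpy as np

rng = np.random.default_rng(0)
n = 16
pre_ctrl = rng.normal(40, 10, n)
post_ctrl = pre_ctrl + rng.normal(0, 5, n)          # control: no change
pre_trt = rng.normal(40, 10, n)
post_trt = pre_trt + 25 + rng.normal(0, 5, n)       # training: +25 points

pre = np.concatenate([pre_ctrl, pre_trt])
post = np.concatenate([post_ctrl, post_trt])
group = np.concatenate([np.zeros(n), np.ones(n)])   # 1 = simulation-based training

# Design matrix: intercept, baseline score, group indicator
X = np.column_stack([np.ones(2 * n), pre, group])
beta, *_ = np.linalg.lstsq(X, post, rcond=None)
print(f"adjusted group difference: {beta[2]:.1f}")  # close to the simulated +25
```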

Results

Descriptive statistics (mean and standard deviations) of pre- and postintervention scores of mental workload, situation awareness, and performance are provided in Table 3.
Table 3

Descriptive statistics on mental workload (global NASA-TLX), SART, overall performance, procedural compliance with timeout, and error detection scores

All values are mean (SD); "without/with training" refers to simulation-based training.

Measures | Without training, pre | Without training, post | With training, pre | With training, post
Mental workload
 Global NASA-TLX score | 33 (21) | 33 (24) | 40 (15) | 39 (18)
 Mental demand | 40 (14) | 40 (28) | 48 (22) | 45 (21)
 Physical demand | 12 (13) | 16 (13) | 20 (21) | 18 (13)
 Temporal demand | 19 (18) | 23 (19) | 32 (16) | 30 (21)
 Performance | 35 (27) | 31 (20) | 36 (29) | 40 (30)
 Effort | 32 (20) | 35 (26) | 41 (22) | 41 (22)
 Frustration | 24 (26) | 28 (26) | 32 (24) | 30 (23)
Situation awareness
 SART composite score | 24 (7) | 23 (4) | 23 (6) | 22 (5)
 Demands on attentional resources | 11 (4) | 11 (3) | 9 (4) | 10 (3)
 Supply of attentional resources | 17 (6) | 16 (3) | 16 (3) | 17 (3)
 Understanding of the situations | 11 (2) | 11 (2) | 11 (2) | (2)
Performance
 Overall | 30 (29) | 31 (30) | 42 (35) | 65 (35)
 Timeout | 58 (28) | 56 (33) | 59 (28) | 75 (29)
 Error detection | 20 (35) | 23 (33) | 36 (42) | 62 (44)

NASA-TLX, NASA Task Load Index; SART, situation awareness rating technique; SD, standard deviation

SAGAT scores are not included because participants correctly responded to 100% of probes.

Statistically significant (P < .05)


Subjective workload (NASA-TLX)

No significant differences in the postintervention assessment scores were noted between RTTs who did and did not receive simulation-based training on the NASA-TLX scores (composite or individual dimensions; Table 3).

Situational awareness (SART and SAGAT)

No significant differences in the postintervention assessment scores between RTTs who did and did not receive simulation-based training were noted for the SART (composite or areas [S, D, and U]) or SAGAT scores (Table 3). The only significant difference was noted in the SART subscore related to concentration, with an adjusted difference of 0.7 (95% confidence interval [CI], 0.02-1.4) points higher in the intervention group than in the control group (P = .04). Participants in both groups (intervention and control) correctly answered all SAGAT probing questions, thus scoring 100% across all assessments.

Performance

On average, participants randomized to simulation-based training had an overall postintervention performance score 29.5 points higher (adjusted difference; 95% CI, 10.12-48.96) than participants in the control group (P < .01; Fig. 2). Training-group participants also had an error detection score 35.5 points higher (adjusted difference; 95% CI, 12.34-58.62) than the control group (P < .01; Fig. 3). The difference in procedural compliance with timeout component scores was not statistically significant (P = .07); however, scores were numerically higher in the simulation-based training group (Fig. 4).
Figure 2

Participants randomized to simulation-based training showed improvements in overall performance scores compared with participants in the control group (P < .01).

Figure 3

Participants randomized to simulation-based training showed improvements in error detection, communication, and documentation scores compared with participants in the control group (P < .01).

Figure 4

Participants randomized to simulation-based training showed trending improvements (not significant) in procedural compliance with timeouts.


Discussion

We hypothesized that our simulation-based training could affect the perception of mental workload (eg, scoring higher on the NASA-TLX due to recognition and respect for experienced mental workload, or scoring lower due to the training sessions) and situation awareness (eg, scoring higher or lower on the SART due to more attention to detail or more concentration on the task; responding correctly vs incorrectly to SAGAT probes). The only significant difference was noted in the concentration dimension of the SART subscore, suggesting perhaps some level of improved cognitive attention during the simulated scenarios. There were no statistically significant differences in composite mental workload and situation awareness scores between the study groups, suggesting that our simulation-based training did not have a large effect on the perceptions of these measures. The composite NASA-TLX and SART scores were relatively low and virtually unchanged across the assessments (Table 3). This itself is an important finding, suggesting that RTTs perceived our simulated scenarios to be within the acceptable ranges of mental workload and situation awareness (NASA-TLX < 41; SART < 25), where performance degradation should not be expected; this is somewhat confirmed by the 100% correct responses to probing questions (SAGAT). Overall, these results are consistent with those of Mazur et al., who found that simulation-based training had no significant impact on the mental workload of radiation oncologists, physicists, and dosimetrists during treatment planning.
On the other hand, across other medical domains, simulation-based training is often associated with improvements in mental workload and situation awareness. Thus, we acknowledge that our simulation-based training was ineffective in producing large changes, and that perhaps more training time or different training methods with a more diverse set of simulated scenarios are needed to affect RTT perceptions of mental workload and situation awareness during routine QA and treatment delivery tasks. Second, we found that our simulation-based training significantly improved overall performance, especially the error detection score (a combination of detection, communication, and documentation), with modest improvements in timeout scores. This is encouraging because procedural compliance with timeouts and detection of errors remain critical components of QA systems in RT.40, 41, 42, 43, 44 Specifically, RTTs who completed the simulation-based training accurately detected an error and contacted and consulted with the appropriate professional staff members depending on the error type. For example, when RTTs detected an embedded error regarding an incorrect assessment of a pacemaker in the Mosaiq software, they contacted both the physicist and the radiation oncologist to confirm that the patient had the pacemaker and that the physicist had performed the expected additional tasks, documented the information appropriately, and were willing to report these errors to the in-house incident reporting system, all of which represented a positive effect of our simulation-based training on RTT safety mindfulness. This result is also consistent with the findings of Mazur et al, who found that simulation-based training had a significant impact on the procedural performance of radiation oncologists, physicists, and dosimetrists during treatment planning. The current study has several limitations.
First, the results are based on a limited number of assessments with a limited number of RTTs from only 2 academic hospitals. Given the time commitment needed to conduct simulation-based training with busy RTTs, a relatively modest number of participants is common in these types of studies. Second, some RTTs were possibly less familiar with the assigned routine QA and delivery techniques; thus, familiarity (or lack thereof) could have played a role in how RTTs approached the scenarios. Nonetheless, all scenarios were considered routine, and all RTTs were familiar with the QA procedures for the simulated scenarios. Third, although the 2 study groups were randomized, differences in experience may have affected our results. Unfortunately, we did not formally collect data on years of experience; however, we gathered that most RTTs participating in our study had at least 3 years of experience. Fourth, not all environmental conditions were replicated in our laboratory (eg, interruptions that may occur in real-world practice). Because our laboratory was exposed to overhead announcements, we allowed participants to listen and respond to clinical pages as needed. Also, our simulation setting did not fully duplicate a real-world clinical setting (eg, no patient on the treatment table). We informed RTTs about these limitations of our setting and asked them to take the simulated scenarios as seriously as they would take procedures conducted in the real-world clinical environment. Fifth, fatigue from clinical work could have affected RTTs’ mental workload, situation awareness, and performance. To control for fatigue levels, we asked RTTs to subjectively evaluate their own state of fatigue immediately before the experiments using the fatigue portion of the Crew Status Survey. This survey required each RTT to circle the option that best described how they felt at that moment, with 0 representing no fatigue and 10 complete fatigue.
There were no statistical differences between the pre- and postintervention assessments or between the control and intervention groups, indicating a relatively low level of fatigue. Sixth, 8 RTTs from each study group (intervention vs control) were also part of another study aim in which they were further randomized to a neurofeedback intervention (training to improve self-regulation of cognitive skills) between the pre- and postassessments, which may have influenced these results. Thus, we reran our analysis while adjusting for neurofeedback and confirmed that all our results regarding simulation-based training remained virtually unchanged, while noting that RTTs assigned to both simulation-based training and the neurofeedback intervention showed further improvements in performance. The specific results of the neurofeedback intervention study will be reported in future manuscripts. Finally, we recognize that the simulation-based assessments themselves may have produced apprehension that could affect participants’ mental workload, situation awareness, and performance. Such biases can occur in most simulation-based research. To moderate this effect, all RTTs were informed (by consent form and verbally by researchers) that participation was voluntary, that they had the right to decline participation at any point in time, and that their results would remain confidential. None of the RTTs dropped out of the study.

Conclusions

Based on our findings, continuing to research the utility of simulation-based training among RTT professionals seems rational. If deemed effective, simulation-based training could be endorsed by recognized societies and associations, such as the American Society of Radiologic Technologists, and supported by commercial vendors as part of training packages. At the clinic level, simulation-based training focused on RTT procedural compliance with timeouts, and treatment delivery tasks could become part of the regular education curricula for RTTs and students. This could be done in controlled settings with fake plans and embedded errors, and clinical supervisors acting as mentors. Such training could enhance RTT safety mindfulness, which could help avoid serious errors.46, 47, 48, 49
References

1. Ahmed M, Sevdalis N, Paige J, Paragi-Gururaja R, Nestel D, Arora S. Identifying best practice guidelines for debriefing in surgery: a tri-continental study. Am J Surg. 2012.

2. McAlearney AS, Robbins J, Kowalczyk N, Chisolm DJ, Song PH. The role of cognitive and learning theories in supporting successful EHR system implementation training: a qualitative study. Med Care Res Rev. 2012.

3. Boulet JR, Murray DJ. Simulation-based assessment in anesthesiology: requirements for practical implementation. Anesthesiology. 2010.

4. Fanning RM, Gaba DM. The role of debriefing in simulation-based learning. Simul Healthc. 2007.

5. Stiegler MP, Gaba DM. Decision-making and cognitive strategies. Simul Healthc. 2015.

6. Ford EC, Fong de Los Santos L, Pawlicki T, Sutlief S, Dunscombe P. Consensus recommendations for incident learning database structures in radiation oncology. Med Phys. 2012.

7. Marks LB, Jackson M, Xie L, et al. The challenge of maximizing safety in radiation oncology. Pract Radiat Oncol. 2011.

8. Hoopes DJ, Dicker AP, Eads NL, et al. RO-ILS: Radiation Oncology Incident Learning System: a report from the first year of experience. Pract Radiat Oncol. 2015.

9. Aldekhyl S, Cavalcanti RB, Naismith LM. Cognitive load predicts point-of-care ultrasound simulator performance. Perspect Med Educ. 2018.

10. Abe T, Dar F, Amnattrakul P, et al. The effect of repeated full immersion simulation training in ureterorenoscopy on mental workload of novice operators. BMC Med Educ. 2019.
