
Validation and implementation of surgical simulators: a critical review of present, past, and future.

B M A Schout1, A J M Hendrikx, F Scheele, B L H Bemelmans, A J J A Scherpbier.   

Abstract

BACKGROUND: In the past 20 years the surgical simulator market has seen substantial growth. Simulators are useful for teaching surgical skills effectively and with minimal harm and discomfort to patients. Before a simulator can be integrated into an educational program, it is recommended that its validity be determined. This study aims to provide a critical review of the literature and the main experiences and efforts relating to the validation of simulators during the last two decades.
METHODS: Subjective and objective validity studies between 1980 and 2008 were identified by searches in Pubmed, Cochrane, and Web of Science.
RESULTS: Although several papers have described definitions of various subjective types of validity, the literature does not offer any general guidelines concerning methods, settings, and data interpretation. Objective validation studies on endourological simulators were mainly characterized by a large variety of methods and parameters used to assess validity and in the definition and identification of expert and novice levels of performance.
CONCLUSION: Validity research is hampered by a paucity of widely accepted definitions and measurement methods of validity. It would be helpful to those considering the use of simulators in training programs if there were consensus on guidelines for validating surgical simulators and the development of training programs. Before undertaking a study to validate a simulator, researchers would be well advised to conduct a training needs analysis (TNA) to evaluate the existing need for training and to determine program requirements in a training program design (TPD), methods that are also used by designers of military simulation programs. Development and validation of training models should be based on a multidisciplinary approach involving specialists (teachers), residents (learners), educationalists (teaching the teachers), and industrial designers (providers of teaching facilities). In addition to technical skills, attention should be paid to contextual, interpersonal, and task-related factors.


Year:  2009        PMID: 19633886      PMCID: PMC2821618          DOI: 10.1007/s00464-009-0634-9

Source DB:  PubMed          Journal:  Surg Endosc        ISSN: 0930-2794            Impact factor:   4.584


Validation of surgical simulators in the last two decades

While simulation and simulators have a long history in training programs in various domains, such as the military and aviation, their appearance on the scene of surgical training is more recent [1]. Simulators offer various important advantages over both didactic teaching and learning by performing procedures in patients. They have been shown to prevent harm and discomfort to patients and shorten learning curves, the latter implying that they also offer cost benefits [2-11]. They are tailored to individual learners, enabling them to progress at their own rate [6]. Additionally, learning on simulators in a skillslab environment allows learners to make mistakes. This is important considering that learning from one’s errors is a key component of skills development [4, 8, 11]. Apart from their worth as training instruments, simulators can also be valuable for formative and summative assessment [3, 6, 12] because they enable standardized training and repeated practice of procedures under standardized conditions [13]. These potential benefits are widely recognized and there is considerable interest in the implementation of simulators in training programs. It is also generally accepted, however, that simulators need to be validated before they can be effectively integrated into educational programs [5, 6, 14, 15]. Validation studies address different kinds of validity, such as “face,” “content,” “expert,” “referent,” “discriminative,” “construct,” “concurrent,” “criterion,” and/or “predictive” validity. There is no uniformity in how these types of validity are defined in different papers [15-18]. Additionally, a literature search failed to identify any description of guidelines on how to define and measure different types of validity. Nevertheless, most papers report positive results in respect of all kinds of validity of various simulators. However, what do these results actually reflect? 
This paper is based on a review of the literature and the main experiences and efforts relating to the validation of simulators during the last two decades. Based on these, suggestions are made for future research into the use of simulators in surgical skills training.

Terminology of validation

What exactly is validation and what types of validity can be distinguished? There is general agreement in the literature that a distinction can be made between subjective and objective approaches to validation [15-18]. Subjective approaches examine novices’ (referents’) and/or experts’ opinions, while objective approaches are used in prospective experimental studies. Face, content, expert, and referent validity concern subjective approaches to validity. These types of validity studies generally require experts (usually specialists) and novices (usually residents or students) to perform a procedure on a simulator, after which both groups are asked to complete a questionnaire about their experience with the simulator. Objective approaches concern construct, discriminative, concurrent, criterion, and predictive validity, and these studies generally involve experiments to ascertain whether a simulator can discriminate between different levels of expertise or to evaluate the effects of simulator training (transfer) by measuring real-time performance, for example, on a patient, a cadaver, or a substitute real-time model.

Subjective approaches to validity (expert and novice views)

A literature search for guidelines on face and content validity yielded several definitions of validity [15-18] but no guidelines on how it should be established. As illustrated in Table 1, studies on face and content validity have used rather arbitrary cutoff points to determine the appropriateness and value of simulators [16, 19–24]. The variety in scales and interpretations in the literature suggests a lack of consensus regarding criteria for validity.
Table 1

Methods used to quantify and interpret face and content validity

Type of questionnaire: Yes/no questions on realism and appropriateness [20]
Def. by no. of procedures: E: >30/year (specialists); N: <30/year (specialists or industry employees)
Research setting: Conference/course
Interpretation: % of yes/no answers
Cutoff point: no actual cutoff point

Type of questionnaire: Four-point Likert scale on realism and usefulness [21]
Def. by no. of procedures: E: >1,000; experienced: 200–1,000; intermediate experienced: <200; N: no endoscopy experience (interns, residents or specialists)
Research setting: Conference/course + in a hospital during daily work time
Interpretation: realism: 1 = very unrealistic, 4 = very realistic; usefulness: not mentioned
Cutoff point: 2.95 = “good”, 2.57 = “doubtful”

Type of questionnaire: Five-point ordinal scale on realism and usefulness [22]
Def. by no. of procedures: E: >100; N: ≤100 (surgeons and residents)
Research setting: Conference
Interpretation: 1 = not realistic/good/useful, 5 = very realistic/good/useful
Cutoff point: >3.5 “favorable”, 4.0 “quite well”

Type of questionnaire: Five-point Likert scale on first impression and training capacities [24]
Def. by no. of procedures: E: ≥100; N: <100 (surgeons and surgical trainees)
Research setting: In a hospital during daily work time
Interpretation: 1 = very bad/useless, 5 = excellent/very useful
Cutoff point: mean score 4.0 “good”, mean score 3.3 “relatively low”

Type of questionnaire: Five-point modified Likert scale on acceptability of the simulator [23]
Def. by no. of procedures: E: board-certified urologists
Research setting: Conference
Interpretation: 0.0 = totally unacceptable, 1.0 = moderately unacceptable, 2.0 = slightly unacceptable, 3.0 = slightly acceptable, 4.0 = moderately acceptable, 5.0 = totally acceptable
Cutoff point: 3.0 = slightly acceptable

Type of questionnaire: Seven-point Likert scale on realism and usefulness [16]
Def. by no. of procedures: E: >50; N: ≤50 (gynecological surgeons)
Research setting: Conference/course
Interpretation: realism: 1 = absolutely not realistic, 2 = not realistic, 3 = somewhat not realistic, 4 = undecided, 5 = somewhat realistic, 6 = realistic, 7 = absolutely realistic; usefulness: 1 = strongly disagree, 7 = strongly agree
Cutoff point: realism: 5 = somewhat realistic; usefulness: 6 = useful

Type of questionnaire: Ten-point scale on realism, effectiveness and applicability [19]
Def. by no. of procedures: E: >50; N: ≤50 (surgeons and surgical residents)
Research setting: Conference/course + in a hospital during daily work time
Interpretation: 10 = very positive
Cutoff point: ≥8 “positive”


It is not only important to decide how validity is to be determined; it is also important to decide who is best suited to undertake this task. The literature offers no detailed answers in this regard. It may be advisable to entrust this task to focus groups of specialists who are experts in the procedure in question and in judging simulators. Perhaps judges should also be required to possess good background knowledge on simulators and simulator development. Preferred settings of validation studies need to be considered as well. So far, most tests of face and content validity of surgical simulators have been conducted at conferences (Table 1), where participants are easily distracted by other people and events.
Selection bias may also be inherent in this setting, because those who do not believe in simulator training are unlikely to volunteer to practice on a simulator, let alone participate in a validation study.
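The scoring conventions listed in Table 1 reduce to a simple computation: average the ratings and compare the mean against a chosen threshold. The sketch below illustrates this; the ratings, the item, and the participants are hypothetical, and the 3.5 cutoff merely echoes the >3.5 “favorable” threshold used in one of the reviewed studies [22]. It is not taken from any specific paper.

```python
# Sketch of how face/content-validity questionnaires are typically
# scored: a mean Likert rating per item is compared with an arbitrary
# cutoff. All data and the cutoff below are hypothetical.

def mean_rating(scores):
    """Mean of a list of Likert ratings."""
    return sum(scores) / len(scores)

def interpret(mean, cutoff):
    """Label a mean rating relative to a chosen cutoff (cf. Table 1)."""
    return "favorable" if mean >= cutoff else "doubtful"

# Hypothetical five-point realism ratings from eight experts
realism = [4, 5, 3, 4, 4, 5, 3, 4]
cutoff = 3.5  # hypothetical, echoing the >3.5 "favorable" threshold [22]

m = mean_rating(realism)
print(f"mean realism = {m:.2f} -> {interpret(m, cutoff)}")
# prints: mean realism = 4.00 -> favorable
```

The arbitrariness the review criticizes is visible here: shifting the cutoff from 3.5 to 4.1 would flip the verdict on the very same data.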

Objective approaches (experimental studies)

Experimental studies on the simulator

Several studies published between 1980 and 2008 have examined the construct (discriminative) validity of simulators for endourological procedures [25, 26]. Although the concept of construct validity is somewhat clearer than the concepts underlying subjective validity studies, there was substantial variation in methods, data analysis, participants, and outcome parameters [25]. Table 2 presents the methods used in these studies, in which medical students and residents were the novices, and specialists fulfilled the role of experts, unless mentioned otherwise. Time taken to complete a procedure was a parameter used in all the studies. Time is considered a parameter of importance, but it is not necessarily indicative of achievement of the desired outcome [27]. An exclusive focus on decreasing performance time may eventually result in decreased quality of outcome, suggesting that, besides time, other parameters should be taken into account in measuring validity.
Table 2

Methods and parameters used to assess construct validity

Type of procedure: UCS
Training model: Virtual reality
Method (training program): One task, ten repetitions, one occasion (unsupervised) [69]
Def. expert (E)/novice (N): N: no endoscopic experience (urology nurse practitioners)
Parameters: Total time, no. of flags

Type of procedure: UCS
Training model: Virtual reality
Method (training program): One task, ten repetitions, one occasion [39]
Def. expert (E)/novice (N): E: >1,000 flexible cystoscopies; N: no endoscopic experience
Parameters: Total time, no. of flags

Type of procedure: UCS
Training model: Virtual reality
Method (training program): Three tasks, five times, one occasion [38]
Def. expert (E)/novice (N): E: ≥100 flexible and rigid cystoscopies; N: no cystoscopies performed (specialist assistants and extenders, nurses, technicians, office workers)
Parameters: Time

Type of procedure: UCS
Training model: Virtual reality
Method (training program): Two training sessions, two occasions [70]
Def. expert (E)/novice (N): E: residents; N: no endoscopic experience
Parameters: Mucosa inspected

Type of procedure: URS
Training model: Virtual reality
Method (training program): Individual 10-min practical mentoring [71]
Def. expert (E)/novice (N): E: medical students who had undergone training; N: no endoscopic experience
Parameters: Time to bladder neck, time to ureteral orifice, time to cannulate orifice, time to calculus and total time, number of attempts at cannulation, number of times subject had to be reoriented, perforation rate, number of ureteral petechiae, OSATS

Type of procedure: URS
Training model: Virtual reality
Method (training program): 5 h training in 2 weeks [72]
Def. expert (E)/novice (N): E: residents with varying degrees of endoscopic experience; N: no endoscopic experience
Parameters: Total time, fluoroscopy time, instrument trauma, attempts at cannulation, OSATS

Type of procedure: URS
Training model: Virtual reality
Method (training program): Five 30-min training sessions over 2 weeks [73]
Def. expert (E)/novice (N): E: medical students who had undergone training (no exact no. mentioned); N: no endoscopic experience (no exact no. mentioned)
Parameters: Total time, fragmentation time, trauma, perforation, insert guidewire, ability to perform task, overall performance, OSATS, self-evaluation

Type of procedure: URS
Training model: Virtual reality
Method (training program): No training [30]
Def. expert (E)/novice (N): E: >80 URS; N: <40 URS (urologists)
Parameters: Mean time, time of progression to stone contact, X-ray exposure time, bleeding events, clearance of stone

Type of procedure: URS
Training model: Virtual reality
Method (training program): Ten 30-min training sessions in 2 weeks [31]
Def. expert (E)/novice (N): E: residents; N: no endoscopic experience
Parameters: Overall time, fluoroscopy time, trauma, no. of URS attempts, OSATS

Type of procedure: URS
Training model: Virtual reality
Method (training program): No training [74]
Def. expert (E)/novice (N): E: senior residents (6.4 ± 2.3 URS performed); N: junior residents (1 ± 6 URS performed)
Parameters: Time, scope trauma, instrument trauma, percent passing, guidewire insertion attempts, OSATS, checklist score

Type of procedure: URS
Training model: Bench
Method (training program): 1 h practice session [75]
Def. expert (E)/novice (N): E: senior residents; N: junior residents
Parameters: Time, pass rating, checklist score, OSATS

Type of procedure: URS
Training model: Bench
Method (training program): Two 2-day courses with 8 and 16 h practice, respectively [29]
Def. expert (E)/novice (N): N: 14 participants with no experience, 5 performed 6–10 URS, 7 performed 1–5 URS
Parameters: Checklist, OSATS, total score

Type of procedure: TURBT
Training model: Virtual reality
Method (training program): Two training sessions, two occasions [70]
Def. expert (E)/novice (N): E: residents; N: no endoscopic experience
Parameters: Total time, tumors treated, blood loss

Type of procedure: TURP
Training model: Virtual reality
Method (training program): No training [23]
Def. expert (E)/novice (N): E: board-certified urologists; N: master level education or above
Parameters: Orientation time, coagulation time, cutting time, grams resected, cuts at tissue, total fluid use, cut pedal presses, blood loss

Type of procedure: TURP
Training model: Virtual reality
Method (training program): No training [76]
Def. expert (E)/novice (N): E: urologists; N: master’s degree educational level; residents: in training
Parameters: Orientation time, coagulation time, cutting time, blood loss, gm resected, tissue cuts

Type of procedure: TURP
Training model: Virtual reality
Method (training program): Six tasks, one occasion [77]
Def. expert (E)/novice (N): N: no endoscopic experience
Parameters: Time during which there was high pressure, resected volume, blood loss, distance the resectoscope tip was moved, amount of absorbed irrigation fluid

Type of procedure: Perc
Training model: Virtual reality
Method (training program): Two 30-min training sessions separated by a minimum of 24 h [78]
Def. expert (E)/novice (N): 31 students, 31 residents, 1 fellow
Parameters: Total time, fluoroscopy time, attempted needle punctures, blood vessel injuries, collecting system perforation


In general surgery there is a similar awareness of discrepancies in the usage and interpretation of construct validity and outcome parameters. Thijssen et al. conducted a systematic review of validation of virtual-reality (VR) laparoscopy metrics, searching two databases and including 40 publications out of 643 initial search results [28]. The data on construct validation were unequivocal for “time” in four simulators and for “score” in one simulator [28], but the results were contradictory for all the other VR metrics used. These findings led those authors to recommend that outcome parameters for measuring simulator validity should be reassessed and based on analysis of expert surgeons’ motions, decisive actions during procedures, and situational adaptation.
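Construct-validity experiments of the kind listed in Table 2 typically reduce to a group comparison, for instance of expert versus novice task-completion times. As a minimal sketch, assuming hypothetical timing data and a rank-based test (a common choice for small, non-normal samples; the reviewed studies used a variety of analyses), a pure-Python Mann-Whitney U test with a normal approximation could look like this:

```python
# Hypothetical construct-validity check: do experts complete a
# simulator task faster than novices? Pure-Python Mann-Whitney U
# (rank-sum) test with normal approximation; all data are invented.
import math

def mann_whitney_u(a, b):
    """Return (U1, two-sided p) for samples a and b (normal approx., tie-averaged ranks)."""
    combined = sorted((v, i) for i, v in enumerate(a + b))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank across a tie group
        for k in range(i, j + 1):
            ranks[combined[k][1]] = avg
        i = j + 1
    n1, n2 = len(a), len(b)
    r1 = sum(ranks[:n1])  # rank sum of sample a
    u1 = r1 - n1 * (n1 + 1) / 2
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u1 - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return u1, p

# Invented task-completion times (seconds)
experts = [95, 110, 102, 88, 120]
novices = [210, 185, 250, 199, 230]

u, p = mann_whitney_u(experts, novices)
print(f"U = {u}, p = {p:.4f}")
```

Even when such a test separates the groups cleanly, the review's point stands: a significant difference in time says nothing about whether time is the parameter novices most need to learn.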

Transfer of simulator-acquired skills to performance in patients

Only three studies have examined criterion validity of endourological simulators [25, 29–31]. Ogan et al. demonstrated that training on a VR ureterorenoscopy (URS) simulator improved performance on a male cadaver [31]. Knoll et al. trained five residents in the URS procedure on the URO Mentor and compared their simulator performances with the in-patient performances of five other residents, as rated by unblinded supervisors [30]. Brehmer et al. compared experts’ real-time performances with their performances on a simulator [29]. Transfer studies of laparoscopic and endoscopic simulators have shown very positive results regarding improvement of real-time performances [12, 29–37]. These results should be interpreted with caution, however, because of small sample sizes (frequently fewer than 30 participants), lack of randomization, supervisors who were not blinded to type of training, groups with dissimilar backgrounds (e.g., surgical and nonsurgical residents), and/or studies limited to a comparison between experts’ performances on a simulator and in the operating room rather than between experts’ and novices’ performances. Also, some of these studies did not use real patients but human cadavers or animal models to measure real-time performance [31, 33]. Ethical and legal concerns may hamper transfer studies, for which the ideal study protocol would involve groups of trained and untrained participants performing the procedure of interest in a patient. Even though today many residents learn procedures in patients without prior training on a simulator, this type of study is unlikely to gain the approval of Medical Review Ethics Committees, especially if it tests the hypothesis that trained participants will outperform controls, which implies that the patient is at risk when procedures are performed by controls.

Definition of novices and experts

An important issue in validity research is defining participants’ levels of expertise. Generally, the term “novices” designates persons with no experience at all in performing the procedure under study, while the term “expert” refers to specialists with ample experience in performing the procedure in patients. However, some studies labeled participants with only some experience as “novices” while residents who had not yet completed the learning curve were considered “experts” (Tables 1 and 2). In the absence of clear standards for classifying experts and novices, researchers apparently use arbitrary cutoff points. With regard to urethrocystoscopy, for example, Gettman et al. classified those who had performed 100 procedures or more as experts [38], whereas Shah et al. required performance of > 1,000 procedures for qualification as an expert [39]. Apart from differences regarding the number of procedures used as the cutoff point between novice and expert, it is questionable whether it is at all defensible to use number of procedures performed as a measure of expertise. For one thing, self-estimated numbers are likely to be unreliable [40] and, furthermore, having performed more procedures does not automatically correlate with increased quality of performance. It might be better to focus on external assessment of expertise or a more objective standard to discriminate between experts and novices.

Recommendations for validation and implementation of surgical training models

It is inadvisable to use training models before their validity as an educational tool has been proven by research [5, 6, 14, 15]. However, there is as yet no consensus on appropriate methods and parameters to be used in such studies. So far validity studies have mainly focused on technical skills. Although these skills are important they are not the only aspect of operating on patients. The problems concerning transfer studies and the diversity of study outcomes demonstrate that it may be better to design and evaluate a comprehensive training program instead of validating only one aspect or part of a procedure that can be performed on a simulator. This requires an understanding of educational theories and backgrounds and a multidisciplinary approach in which specialists, residents, educationalists, and industrial designers collaborate. In addition, we should learn from experiences in other domains, such as the military and aviation, where similar difficulties with regard to the use of simulators in training are encountered.

Integration of training needs analysis and training program design in developing training facilities

“For a long time, simulator procurement for military training purposes has been mainly a technology-pushed process driven by what is offered on the market. In short, the more sophisticated the simulator’s capabilities, the more attractive it is to procure. Training programmes are later developed based upon the device procured, sometimes only for the training developers to conclude that the simulator “did not meet the requirements” or, even worse, that it was unusable because of a complete mismatch between the capabilities and limitations of the device on the one hand and the basic characteristics and needs of the trainees on the other” [41]. Nowadays, there is awareness of the mechanism described by Farmer et al. within surgical communities too, and there is also a growing realization of the need to reevaluate the methods and approaches used in developing surgical training programs. In military training in the 1990s there was a generally acknowledged need for an integrated framework as well as research and development of simulations based on the realization that the world was changing and conditions and constraints were evolving [41]. It was stated that “simulation by itself cannot teach” and this concept led to the Military Applications of Simulator and Training concepts based on Empirical Research (MASTER) project in 1994, in which 23 research and industrial organizations in five countries combined their knowledge to develop generic concepts and common guidelines for the procurement, planning, and integration of simulators for use in training. The MASTER project underlined the importance of three key phases of program development: training needs analysis (TNA), training program design (TPD), and training media (simulators, for example) specification (TMS) [41]. These phases have also been described in the medical education literature [2, 42]. TNA involves task analysis and listing the pitfalls of a procedure that need to be trained. 
When training needs and the place of a simulator in the curriculum are analyzed before a simulator is actually introduced, a major problem of validation studies can be avoided, namely the fact that some simulators train and measure different, not equally relevant, parameters [43]. TPD follows TNA, and is concerned with organizing the existing theoretical and practical knowledge about the use of simulators with a focus on outlining training program requirements. Following TPD, the TMS phase focuses on simulator requirements. Validation has its place in this phase. As Satava stated “Simulators are only of value within the context of a total educational curriculum” and “the technology must support the training goals” [44]. Figures 1 and 2 present a ten-step approach to developing surgical training programs. Figure 1 represents the preparation phase, consisting of training needs analyses. Figure 2 shows a recommended approach to evaluating and implementing surgical simulators in curricula. For every new training program it should be considered whether all the steps of the process are feasible and cost effective. New developments and improvement of education mostly require financial investments. However, in order to minimize costs it is important to consider the expected benefits as well as possible drawbacks and the costs that go along with those.
Fig. 1

The training needs analysis phase of training program development.

Fig. 2

Creating a training program, including Training Program Design and Training Media (model) Specification.

Accreditation and certification are also very important aspects that need to be considered once the definitive training program has been designed. Because accreditation and certification follow program development, they are not included in Figs. 1 and 2.

Integration of nontechnical factors that influence practical skills performances

As early as 1978 Spencer et al. pointed out that a skillfully performed operation is 75% decision making and only 25% dexterity [45]. Nontechnical (human) factors strongly influence residents’ and students’ performances [3, 14, 46–57]. Moreover, research concerning safety in surgery has shown that adverse events are frequently preceded by individual errors, which are influenced by diverse (human) factors [9, 58]. Surgical training is still very much focused on technical skills, although a skillslab environment may be an ideal situation for integrating technical and nontechnical factors. There is still a gap between research into human factors and educational research [41]. Taking account of expertise on human factors early in the development of training programs, and also in the specification of training media, can contribute considerably to improving the validity and cost-effectiveness of training [41]. Effective surgical training depends on programs that are realistic, structured, and grounded in authentic clinical contexts that recreate key components of the clinical experience [8, 9, 14, 56, 59, 60]. Ringsted et al. showed that factors involved in the acquisition of technical skills can be divided into three main groups: task, person, and context [53]. The model of the acquisition of surgical practical skills shown in Fig. 3 is based on these groups. It illustrates the complexity of a learning process that is affected by various factors.
Fig. 3

Factors that influence performance of practical skills of trainees.


Collaboration of specialists, residents, educationalists, and industrial designers

Curriculum design is not a task that should be left to one individual. Preferably, it involves multidisciplinary consultations and research [41]. When specialists, residents, educationalists, and industrial designers collaborate and share their knowledge they will be able to make progress in developing and implementing simulator training in curricula [61]. Simulators can assist clinical teachers and relieve some of their burden. Not every specialist is a good teacher. Superior performance of procedures in patients does not automatically imply similar excellence in teaching others to do the same. Currently, training of medical skills during procedures on patients depends largely on the willingness of trainers to allow trainees to practice and improve their diagnostic and procedural skills. As a result, training is strongly teacher and patient centered [62]. A skillslab environment offers a much more learner-centered educational environment [8]. However, this can only be achieved if not only specialists (teachers), but also residents (learners), educationalists (teaching the teachers), and industrial designers (suppliers of teaching facilities) are allowed to contribute their expertise to developing the content of training programs.

Development and evaluation of assessment methods

Performance assessment tools are needed to evaluate and validate surgical simulators. Several methods that have been developed or are being developed involve the use of simulators not only to practice but also to assess skills. VR and augmented-reality (AR) simulators allow automatic gathering of objective data on performance [14, 17, 37]. However, the development of these metrics is itself an emerging field and, as described earlier, there is no uniform approach to measuring performance with VR or AR simulators. Motion analysis, which tracks how trainees move laparoscopic instruments, is a relatively new and important type of assessment [63]. Although it enables objective performance assessment, assessment methods based on data generated by VR/AR simulators and motion analysis offer limited possibilities because of their exclusive focus on technical skills and because many of these systems can only be used in training environments [63]. Another promising development in assessment is error analysis by means of video review [64-66]. Currently, the most commonly used and the only thoroughly validated method to assess technical as well as nontechnical skills is the Objective Structured Assessment of Technical Skills (OSATS). OSATS can be used to assess performance on simulators as well as real-time performance in patients; performance is usually scored by a supervisor on a five-point scale [67]. However, although OSATS has been thoroughly evaluated and validated, it has the disadvantage of being dependent on supervisors’ subjective opinions. As Miller stated in 1990, “No single assessment method can provide all the data required for judgment of anything so complex as the delivery of professional services by a successful physician” [68]. It therefore seems eminently desirable to further develop, evaluate, and validate these assessment methods, especially for assessment of real-time performance.
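To illustrate the kind of metric motion analysis can yield, the sketch below computes total instrument-tip path length from tracked 3-D coordinates, a quantity often read as economy of motion. The trajectories, units, and the choice of path length as the metric are assumptions for illustration, not taken from the cited studies.

```python
# Hypothetical motion-analysis metric: total path length of an
# instrument tip from tracked 3-D samples. All coordinates and
# units below are invented for illustration.
import math

def path_length(points):
    """Sum of Euclidean distances between consecutive 3-D samples."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

# Invented instrument-tip trajectories (cm): the expert moves
# directly to the target, the novice wanders.
expert_track = [(0, 0, 0), (1, 0, 0), (2, 0, 0), (3, 0, 0)]
novice_track = [(0, 0, 0), (1, 2, 0), (0, 1, 1), (3, 0, 0)]

print(f"expert path: {path_length(expert_track):.2f} cm")
print(f"novice path: {path_length(novice_track):.2f} cm")
```

A shorter path alone is as limited a proxy as completion time: it captures dexterity, not decision making, which is exactly the restriction the text notes for motion-based assessment.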

Conclusion

Studies examining the validity of surgical simulators are a prerequisite for progress in the implementation of simulators in surgical education programs. The absence in the literature of general guidelines for interpreting the results of subjective validity studies points to a need to seek consensus, where possible, and to perform research to identify appropriate methods for evaluating this type of validity and for interpreting results. A considerable number of studies have addressed objective construct (discriminative) validity of simulators. However, there is considerable variation in outcome parameters, and it is questionable whether the measured parameters actually reflect the aspects that are most important for novices to learn on a simulator. Few objective studies have examined whether skills learned on a simulator can be transferred successfully to patient care, partly because ethical and legal issues restrict these types of studies. Validation and integration of surgical simulators in training programs may be more efficient if a training needs analysis (TNA) is performed first and program requirements are set in a training program design (TPD) phase by a multidisciplinary team consisting of specialists, residents, educationalists, and industrial designers. Furthermore, for successful transfer of skills from simulator to patient, it is important to consider and include the influence of contextual, (inter)personal, and task-related factors in training programs, rather than merely focusing on technical skills. Multiple validated assessment methods of practical performance are essential for evaluating training programs and individual performances. Current assessment methods are few, not yet thoroughly validated, and mostly focused on technical skills only. Educational and medical communities should join forces to promote further development and validation of the available assessment methods.