Danielle Dumestre1, Justin K Yeung2, Claire Temple-Oberle3. 1. University of Calgary, Alberta, Canada. 2. Department of Surgery, Division of Plastic and Reconstructive Surgery, University of Calgary, Alberta, Canada. 3. Department of Surgery, Division of Plastic and Reconstructive Surgery, University of Calgary, Alberta, Canada. Electronic address: claire.temple-oberle@albertahealthservices.ca.
Abstract
OBJECTIVES: The purpose of this study is to (1) systematically review all the literature pertaining to microsurgical training models and (2) determine which of these are specific to and validated for microsurgery training. DESIGN: PubMed, MEDLINE (OVID/EBSCO), Google Scholar, and Cochrane Central Register of Controlled Trials were searched using preset terms. The last search was conducted in July 2012. Articles of all languages, years of publication, sample sizes, and model types pertaining to microsurgery were included. The eligibility criteria included the use of a microsurgical training model on a subject group, with statistical analysis and measures of validation. Two assessors independently reviewed the articles and their references. RESULTS: Of the 238 articles reviewed, 9 met the criteria. Those excluded were predominantly model descriptions that had not been validated in a set of learners. The 9 models whose performances were assessed in a group of learners included an online curriculum, nonliving prosthetics and biologics, and the live rat femoral artery model. Each model was evaluated for content, construct, face, and criterion (concurrent and predictive) validity, as well as for selection and observation/expectant bias. Content, construct, concurrent, and face validities were consistently demonstrated for all 9 models. Selection bias was also reliably well controlled through random allocation of participants to each study group. Observation/expectant bias was controlled in 6 of the 8 papers. Predictive validity, an arguably more difficult factor to measure, was demonstrated in only 1 article. CONCLUSIONS: Despite a plethora of papers describing microsurgical learning tools, only 9 were found that provided validation of the proposed method of microsurgical skills acquisition.
This review highlights the need for basic yet well-designed studies that substantiate the effectiveness of microsurgical training models by testing them on a subject group and demonstrating a statistically significant improvement with use of the model. Ease of access, cost, and the assessment tools used also require attention.