Literature DB >> 35900153

The use of objective assessments in the evaluation of technical skills in cardiothoracic surgery: a systematic review.

Nabil Hussein1,2, Jef Van den Eynde3, Connor Callahan4, Alvise Guariento5, Can Gollmann-Tepeköylü6, Malak Elbatarny7, Mahmoud Loubani1,2.   

Abstract

OBJECTIVES: With reductions in training time and intraoperative exposure, there is a need for objective assessments to measure trainee progression. This systematic review focuses on the evaluation of trainee technical skill performance using objective assessments in cardiothoracic surgery and its incorporation into training curricula.
METHODS: Databases (EBSCOHOST, Scopus and Web of Science) and reference lists of relevant articles for studies that incorporated objective assessment of technical skills of trainees/residents in cardiothoracic surgery were included. Data extraction included task performed; assessment setting and tool used; number/level of assessors; study outcome and whether the assessments were incorporated into training curricula. The methodological rigour of the studies was scored using the Medical Education Research Study Quality Instrument (MERSQI).
RESULTS: Fifty-four studies were included for quantitative synthesis. Six were randomized controlled trials. Cardiac surgery was the most common speciality utilizing objective assessment methods, with coronary anastomosis the most frequently tested task. Likert-based assessment tools were most commonly used (61%). Eighty-five per cent of studies were simulation-based, with the rest being intraoperative. Expert surgeons were primarily used for objective assessments (78%), with 46% using blinding. Thirty (56%) studies explored objective changes in technical performance, with 97% demonstrating improvement. The other studies were primarily validating assessment tools. Thirty-nine per cent of studies had established these assessment tools into training curricula. The mean ± standard deviation MERSQI score for all studies was 13.6 ± 1.5, demonstrating high validity.
CONCLUSIONS: Despite validated technical skill assessment tools being available and demonstrating trainee improvement, their regular adoption into training curricula is lacking. There is a need to incorporate these assessments to increase the efficiency and transparency of training programmes for cardiothoracic surgeons.
© The Author(s) 2022. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery.

Keywords:  Cardiothoracic surgery; Objective assessment; Resident training; Simulation; Technical skills

Year:  2022        PMID: 35900153      PMCID: PMC9403301          DOI: 10.1093/icvts/ivac194

Source DB:  PubMed          Journal:  Interact Cardiovasc Thorac Surg        ISSN: 1569-9285


INTRODUCTION

Technical skill development in cardiothoracic surgery (CTS) training has been primarily via direct experience in the operating room. The growth of simulation-based training has provided an adjunct to training, with programmes successfully incorporating these methods into their curricula [1-5]. However, with the reduction in training time and intraoperative exposure, there is a greater need for the incorporation of objective assessments to facilitate trainee progression [1, 6–10]. This systematic review focuses solely on the use of objective assessments in the evaluation of technical skill performance for surgical trainees/residents in CTS. Furthermore, it explores whether such assessment methods have been successfully incorporated into training programmes.

METHODS

Ethics statement

Institutional Review Board review was not required for this review as patient data were not used and participant data were anonymous.

Eligibility criteria, databases and search strategy

This study followed the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) reporting guideline [11]. Inclusion criteria comprised original research studies (i.e. randomized controlled trials, observational, cohort, case–control and cross-sectional studies) published in English-language peer-reviewed journals and related to adult and paediatric CTS. Only studies that incorporated objective assessment of technical skills in CTS were included. The following definitions were used for clarity: (i) ‘technical skill’—any hands-on action by a surgeon/trainee in the operating room or simulated operating environment (including benchtop, wet-lab or virtual reality simulation); (ii) ‘assessment’—the reporting of a surgeon/trainee’s technical proficiency/performance of the specified task; (iii) ‘objective’—predefined, structured scoring criteria used to evaluate performance. Furthermore, only studies that included the technical skill assessment of trainee/resident surgeons were included (i.e. studies looking only at medical students or established/expert surgeons were excluded). Validation studies were also included if the above inclusion criteria were satisfied. Exclusion criteria included unpublished abstracts, posters, opinions, case reports, reviews, letters to the editor and editorials. Papers were also excluded if they reported on non-technical skills training or skills that are not directly CTS related; the latter comprised skills not routinely included in CTS training curricula, such as echocardiography, cardiac catheterization and endovascular skills. The following sources were searched, with the assistance of a medical librarian, for articles that met our inclusion criteria and were published by October 2021: EBSCOHOST (MEDLINE, Academic Search Premier, CINAHL Complete), Scopus, Web of Science Core Collection and reference lists of relevant articles. Terms used for the search included the (i) type of surgery (i.e. cardiothoracic, cardio*, cardiac, thoracic, congenital, heart, surgery), (ii) method of assessment (i.e. objective, competence*, technical, skill*, perform*, assess*, tool) and (iii) subject/participant (i.e. trainee, resident, registrar, fellow). The detailed search strategies are provided in Supplementary Material, Appendix SE1. The following steps were taken for study selection: (i) identification of titles through database search, (ii) removal of duplicates, (iii) screening of abstracts, (iv) assessment of full-text articles for eligibility and (v) final inclusion into the study. Studies were selected by 2 independent reviewers (N.H. and J.V.D.E.) according to the inclusion/exclusion criteria. When there was a conflict, a third reviewer (C.C.) made the decision to include or exclude the study.

Data collection

Data extraction was completed independently by 2 authors (N.H. and C.C.). Inconsistencies were resolved by consensus. Information was collected on the following 16 items: (i) study demographics, (ii) study type, (iii) CTS subspeciality, (iv) participant level (i.e. resident, consultant/attending, medical student), (v) number of participants, (vi) task performed/assessed, (vii) type of assessment tool (i.e. Likert scale, checklist, hybrid), (viii) time assessment, (ix) setting of assessment [i.e. intraoperative, simulation (animal, synthetic, hybrid, cadaveric)], (x) number of assessor(s), (xi) level of assessor(s), (xii) use of video assessment, (xiii) blinding of assessor (i.e. participant identification or attempt number removed), (xiv) retrospective versus prospective assessment, (xv) outcome of study (i.e. improvement in performance, validation only) and (xvi) whether the assessment method was established in the training curriculum (Table 1).
Table 1:

Data extraction table of all studies included in the systematic review

Author | Year | Study type (cohort number) | Speciality | Participant (n) | Task performed | Format of assessment | Time assessment | Setting of assessment | Level of assessor (n) | Video used | Assessor blinded | Retrospective/prospective assessment | Outcome | Established in curriculum | Total MERSQI score
Bedetti [29] | 2018 | NR (2) | Thoracic | Trainee, surgeon (20) | Lobectomy | Likert | Y | Simulation-VR | Simulator assessed | N | NA | P | Improvement | NS | 11
Blum [30] | 2004 | R (3) | Thoracic | Trainee (13) | Bronchoscopy | Time + cues | Y | Intraoperative, simulation | Expert (1) | N | N | P | Improvement | Y | 13.5
Bohnen [31] | 2018 | NR (2) | Thoracic | Trainee, surgeon (14) | Emergency thoracotomy | Likert | Y | Simulation-synthetic | Expert (2) | Y | Y | R | Validation only | N | 13.5
Brandao [32] | 2021 | NR (1) | Cardiac | Trainee (16) | CABG, AVR, MVR | Likert | Y | Simulation-animal | NS | N | NS | P | Other (trainees better prepared) | NS | 10.5
Duffy [33] | 2019 | NR (1) | Cardiac | Trainee, student, surgeon (14) | SVG harvesting | Likert | N | Simulation-synthetic | Expert + low experience (2) | Y | Y | R | Validation only | N | 14.5
Fann [34] | 2010 | NR (1) | Cardiac | Trainee (33) | Coronary anastomosis | Likert | N | Simulation-animal + synthetic | Expert (3) | Y | Y | R | Improvement | Y | 13
Fouilloux [35] | 2015 | R (2) | Cardiac | Trainee (9) | CPB management | Likert | N | Simulation-animal | Expert (2) | Y | Y | R | Improvement | NS | 15.5
Ghazy [36] | 2019 | NR (2) | Thoracic | Trainee (10) | Flexible bronchoscopy | Time | Y | Simulation-synthetic | NA | N | N | P | Improvement | NS | 13.5
Greenhouse [37] | 2013 | NR (3) | Cardiac | Trainee, surgeon (19) | MVR | Likert | Y | Simulation-synthetic | Expert (2) | Y | Y | R | Validation only | Y | 13.5
Hance [6] | 2005 | NR (3) | Cardiac | Trainee, surgeon (40) | Coronary anastomosis | Likert | N | Simulation-animal + synthetic | Expert (4) | Y | Y | R | Validation only | Y | 15.5
Hermsen [26] | 2020 | NR (2) | Cardiac | Trainee, surgeon (6) | CPB | Checklist | Y | Simulation-animal | Expert (1) | N | N | P | Validation only | NS | 13.5
Hicks [27] | 2011 | NR (1) | Cardiac | Trainee (32) | CPB | Hybrid | N | Simulation-animal | Expert (1) | N | N | P | No comparison | Y | 12
Hussein [38] | 2020 | NR (2) | Congenital | Trainee, surgeon (10) | Arterial switch operation | Hybrid | N | Simulation-synthetic | Expert + low experience (9) | Y | Y | R | Validation only | Y | 14.5
Hussein [39] | 2020 | NR (1) | Congenital | Trainee, surgeon (30) | Arterial switch operation | Hybrid | Y | Simulation-synthetic | Expert (1) | Y | Y | R | Improvement | Y | 14
Hussein [22] | 2021 | NR (2) | Congenital | Trainee, surgeon (30) | Norwood operation | Hybrid | Y | Simulation-synthetic | Expert + low experience (10) | Y | Y | R | Improvement | Y | 15.5
Hussein [4] | 2020 | NR (1) | Congenital | Trainee (7) | Congenital operations | Hybrid | Y | Simulation-synthetic | Expert (1) | Y | Y | R | Improvement | Y | 13
Iwasaki [40] | 2008 | NR (2) | Thoracic | Trainee, surgeon (8) | VATS lobectomy | Likert | Y | Simulation-synthetic | NS | Y | NS | R | Validation only | Y | 12.5
Jebran [41] | 2019 | NR (2) | Cardiac | Trainee, student (20) | MI-MV surgery | Likert | Y | Simulation-synthetic | Expert (1) | N | N | P | Improvement | N | 13.5
Jensen [42] | 2017 | NR (3) | Thoracic | Trainee, student, surgeon (53) | VATS lobectomy | VR simulator score | Y | Simulation-VR | Simulator assessed | Y | NA | P | Improvement | NS | 15.5
Jensen [18] | 2019 | NR (3) | Thoracic | Trainee, student, surgeon (53) | VATS lobectomy | Likert | N | Simulation-VR | Expert (3) | Y | Y | R | Validation only | NS | 15.5
Joyce [43] | 2011 | NR (1) | Cardiac | Trainee (11) | MV repair | Likert | Y | Simulation-animal + synthetic | Expert (1) | Y | Y | R | Improvement | NS | 13
Joyce [44] | 2018 | NR (1) | Cardiac | Trainee (12) | CPB | Likert | Y | Simulation-synthetic | Expert (4) | Y | N | P | Validation only | Y | 12.5
Karim [45] | 2017 | NR (1) | Cardiothoracic | Trainee (33) | Cardiothoracic cases | Qualitative feedback | N | Intraoperative | Expert (48) | N | N | P | Validation only | NS | 10
Kenny [46] | 2018 | NR (1) | Cardiothoracic | Trainee (20) | CPB, wedge resection | Likert | N | Simulation-animal | Expert (1) | N | N | P | Improvement | Y | 13
Konge [19] | 2012 | NR (3) | Thoracic | Trainee, surgeon (14) | Wedge resection | Likert | N | Intraoperative | Expert (2) | Y | Y | P | Validation only | NS | 15
Korte [47] | 2020 | NR (3) | Cardiac | Trainee, student (19) | Coronary anastomosis | Likert | Y | Simulation-animal + synthetic | Expert (NS) | Y | NS | P | Improvement | Y | 13.5
Lee [48] | 2013 | NR (4) | Cardiac | Trainee, student, surgeon (5) | Coronary anastomosis | Likert | N | Simulation-animal + synthetic | Expert (10) | Y | Y | R | Validation only | Y | 14.5
Li [49] | 2020 | NR (2) | Cardiac | Trainee (12) | Coronary anastomosis | Likert | N | Simulation-animal | Expert (2) | Y | Y | R | Improvement | NS | 13.5
Liu [50] | 2019 | NR (1) | Cardiac | Trainee (5) | CPB | Checklist | Y | Simulation-synthetic | Expert (1) | N | N | P | Validation only | NS | 11.5
Llado-Grove [51] | 2015 | NR (1) | Cardiac | Trainee (83) | Coronary anastomosis | Likert | Y | Simulation-synthetic | NS (2) | N | N | P | Improvement | Y | 14
Lou [16] | 2014 | NR (2) | Cardiac | Trainee, student (4) | Coronary anastomosis | Likert | N | Simulation-synthetic | Low experience (9) | Y | Y | R | Validation only | NS | 12.5
Macfie [20] | 2014 | NR (1) | Thoracic | Trainee (64) | Hilar dissection | Likert | N | Simulation-animal | Expert (NS) | N | N | P | Improvement | Y | 13
Malas [52] | 2018 | R (2) | Cardiac | Trainee (32) | Coronary anastomosis | Likert | Y | Simulation-synthetic | Expert (2) | Y | Y | P | Improvement | NS | 15.5
Maluf [53] | 2015 | NR (1) | Cardiac | Trainee (10) | Coronary anastomosis | Likert | N | Simulation-animal + synthetic | Expert (NS) | Y | N | P | Improvement | Y | 10
Maricic [54] | 2016 | NR (3) | Thoracic | Trainee, surgeon (39) | VATS oesophageal atresia | Checklist | Y | Simulation-synthetic | NS | N | N | P | Validation only | Y | 14.5
Marshall [55] | 2012 | NR (1) | Thoracic | Trainee (13) | Chest wall resection | Hybrid | Y | Simulation-animal | Expert (1) | N | N | P | Improvement | Y | 13
Miura [56] | 2021 | NR (1) | Thoracic | Trainee (3) | Lobectomy | Hybrid | N | Intraoperative | Expert (NS) | N | N | P | Validation only | NS | 12
Nam [57] | 2021 | NR (1) | Congenital | Trainee (6) | ToF repair | Time + subjective score | Y | Simulation-synthetic | Expert (1) | Y | NS | R | Improvement | NS | 12
Nesbitt [17] | 2013 | NR (2) | Cardiac | Trainee, students (21) | Coronary anastomosis | Likert | Y | Simulation-animal | Expert (3) | Y | Y | R | Improvement | NS | 14.5
Ortiz [58] | 2021 | NR (1) | Cardiothoracic | Trainee (16) | CTS trauma | Likert | Y | Simulation-animal | Expert (NS) | Y | Y | R | Improvement | NS | 15.5
Petersen [59] | 2018 | NR (3) | Thoracic | Trainee, surgeon (18) | VATS lobectomy | Likert | N | Intraoperative | Expert (2) | Y | Y | R | Validation only | NS | 16
Price [60] | 2011 | R (2) | Cardiac | Trainee (39) | Coronary anastomosis | Likert | Y | Simulation-animal + synthetic | Expert (2) | Y | Y | R | Improvement | NS | 15.5
Sardari-Nia [61] | 2020 | NR (3) | Cardiac | Trainee, surgeon (102) | MI-MV surgery | Suture accuracy | Y | Simulation-synthetic | NS | Y | NS | R | Improvement | Y | 13
Spratt [62] | 2019 | R (2) | Cardiac | Trainee (29) | Coronary anastomosis | Likert | N | Simulation-synthetic | Expert (1) | Y | Y | R | No change | NS | 13.5
Tanaka [63] | 2021 | NR (3) | Thoracic | Trainee, surgeon (29) | VATS lobectomy | Time | Y | Simulation-synthetic | NS | N | N | P | Validation only | NS | 13.5
Tavlasoglu [64] | 2013 | NR (2) | Cardiac | Trainee (10) | MV repair | Test of repair | N | Simulation-animal | Expert (3) | Y | Y | R | Improvement | NS | 12
Tong [65] | 2012 | NR (3) | Thoracic | Trainee (13) | VATS lobectomy | Checklist | Y | Simulation-animal | Expert (NS) | Y | N | P | Validation only | NS | 12.5
Turner [66] | 2019 | NR (1) | Thoracic | Trainee (5) | Mediastinal staging | Likert | N | Intraoperative | Expert (NS) | Y | N | P | Validation only | Y | 13.5
Turner [67] | 2020 | NR (1) | Thoracic | Trainee (7) | Lung resection | Likert | N | Intraoperative | Expert (NS) | N | N | P | Improvement | NS | 16
Valdis [68] | 2016 | R (4) | Cardiac | Trainee (40) | Robotic ITA + MV repair | VR simulator score | Y | Simulation-animal + synthetic | Expert (2) | Y | Y | R | Improvement | NS | 15.5
Voduc [69] | 2016 | NR (2) | Thoracic | Trainees (19) | Flexible bronchoscopy | Likert | N | Intraoperative | Expert (NS) | N | N | P | Validation only | NS | 14
Whittaker [70] | 2019 | NR (3) | Thoracic | Trainee, student, surgeon (30) | Robotic lobectomy | VR simulator score | Y | Simulation-VR | Simulator assessed | N | NA | P | Validation only | NS | 14.5
Wu [71] | 2020 | NR (1) | Cardiac | Trainee (26) | Coronary anastomosis | Likert | Y | Simulation-animal | Expert (3) | Y | Y | R | Improvement | NS | 13.5
Yasuda [72] | 2021 | NR (2) | Cardiac | Trainee, student (10) | Coronary anastomosis | Likert | Y | Simulation-synthetic | Low experience (1) | N | N | P | Improvement | NS | 13.5

AVR: aortic valve replacement; CABG: coronary artery bypass grafts; CPB: cardiopulmonary bypass; CTS: cardiothoracic surgery; ITA: internal thoracic artery; MI: minimally invasive; MVR: mitral valve replacement; NA: not applicable; NR: non-randomized; NS: not specified; R: randomized; SVG: saphenous vein graft; ToF: tetralogy of Fallot; VATS: video-assisted thoracoscopic surgery; VR: virtual reality.


MERSQI assessment

The methodological rigour of the included studies was scored using the Medical Education Research Study Quality Instrument (MERSQI). This is a validated tool for the quantitative appraisal of medical education research across 8 domains: (i) study design, (ii) institutional sampling, (iii) response rate (i.e. percentage of participants who were objectively assessed), (iv) type of data, (v) validity of evidence for evaluation instrument scores, (vi) sophistication of data analysis, (vii) appropriateness of data analysis and (viii) assessment outcome [12, 13]. Descriptive statistics were used.

RESULTS

Study selection and characteristics

The literature search identified 1613 potentially relevant papers (Fig. 1). Following the removal of duplicates, records were screened by title and abstract, leaving 85 full-text articles to be assessed for eligibility. A further 31 papers were excluded following full-text review, leaving a total of 54 to be included in the quantitative synthesis. Reasons for exclusion are given in Fig. 1. Six of the studies (11%) were randomized controlled trials. A single cohort was used in 37% of studies (20/54), with 2 cohorts being the next most common at 33% (18/54). Three- and 4-cohort studies comprised 24% (13/54) and 6% (3/54) of the sample, respectively (Fig. 2). Cardiac surgery was the most common subspeciality utilizing objective assessment methods at 50% (27/54), followed by thoracic surgery (35%, 19/54), congenital surgery (9%, 5/54) and CTS in general (6%, 3/54). There were no studies in cardiothoracic transplantation (Fig. 3A).
Figure 1:

PRISMA flow diagram of studies included in the search. PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-analyses.

Figure 2:

Chart demonstrating the number of studies which were randomized versus non-randomized (A) and number of cohorts that were included in the studies (B).

Figure 3:

Studies broken down by subspeciality in cardiothoracic surgery (A). Number of participants included in studies (B).


Participant characteristics

Fifty-six per cent (30/54) of studies included only trainees/residents, while 28% (15/54) assessed trainees/residents + expert surgeons, 7% (4/54) used trainees/residents + medical students and 11% (6/54) used trainees/residents + medical students + experts. The average number of participants across all studies was 23, ranging from 1–5 participants (5/54 studies) to >51 participants (5/54 studies; Fig. 3B).

Assessment characteristics

The most common task assessed was coronary anastomosis at 28% (15/54), followed by pulmonary lobectomy (19%, 10/54), cardiopulmonary bypass and mitral valve surgery (both 11% [6/54]). Table 2 demonstrates the full list of tasks included in all studies.
Table 2:

List of technical skill tasks performed during studies that were objectively assessed

Task performedNumber of studies, n (%)
Cardiac surgery
 Coronary anastomosis15 (28)
 Cardiopulmonary bypass6 (11)
 Mitral valve surgery6 (11)
 Conduit harvesting2 (4)
 Aortic valve surgery1 (2)
Thoracic surgery
 Pulmonary lobectomy10 (19)
 Bronchoscopy3 (6)
 Wedge resection2 (4)
 Hilar dissection1 (2)
 VATS oesophageal surgery1 (2)
 Invasive mediastinal staging1 (2)
Cardiothoracic surgery
 Emergency scenarios3 (6)
Congenital cardiac surgery
 Arterial switch operation2 (4)
 Norwood operation1 (2)
 Tetralogy of Fallot repair1 (2)
 Multiple congenital procedures1 (2)
The objective assessment of technical skills was present in all studies. The Likert scale [i.e. the Objective Structured Assessment of Technical Skills (OSATS) tool] was the most commonly used assessment tool at 61% (33/54), while a checklist-based assessment was used in only 7% (4/54) of studies. A hybrid assessment combining both the Likert and checklist methods was used in 13% (7/54) of studies. Other objective assessments included time only (7%, 4/54), automated virtual reality simulator score (6%, 3/54), suture accuracy, test of repair and qualitative feedback (2% each, 1/54). Time assessments were used in 60% of all studies (Fig. 4A).
Figure 4:

Chart demonstrating the types of objective assessment methods used to assess technical skill performance (A) and the setting of the assessment (B).

Eighty-five per cent (46/54) of assessments were made in the setting of simulation, with only 15% (8/54) being intraoperative assessments. Synthetic simulators were the most common at 39% (21/54) of all assessments, followed by animal (26%, 14/54) and hybrid simulation comprising both synthetic and animal simulators (13%, 7/54). Virtual reality simulator assessments were present in 7% (4/54) of studies (Fig. 4B). Twenty-six per cent (14/54) of studies did not specify the number of assessors/evaluators used for the objective assessments. Of the remaining 40 studies, 35% (14) used a single evaluator, with 28% (11) using 2 evaluators. Three (5) and >4 (7) evaluators were used in 13% and 18% of studies, respectively. In 3 studies, the simulator was able to perform an automated assessment. Expert surgeons were primarily used to perform the objective assessments, with 69% (37/54) of studies using only expert assessors. Nine per cent (5/54) of studies used an expert surgeon plus other less-experienced evaluators. Only less-experienced evaluators were used in 4% (2/54) of studies. Seven studies either did not specify the level of evaluator or this was not applicable (i.e. time-only assessment). Assessors were blinded in 46% (25/54) of studies (Fig. 5) and video recordings were used in 63% (34/54). The numbers of assessments performed prospectively (i.e. at the time of the task) or retrospectively (i.e. following the task) were almost equal (52% vs 48%).
Figure 5:

Chart demonstrating the level of assessors used in the studies (A) and whether assessors were blinded or not (B).


Outcomes

Thirty (56%) studies explored objective changes in technical performance (i.e. minimum of 2 sessions/attempts) with 29 (97%) demonstrating improvement. A large proportion of studies (39%, 21/54) were designed to validate the simulator or assessment tool (i.e. demonstrate construct validity) and were not focused on demonstrating objective improvement in performance (Fig. 6A). Thirty-nine per cent (21/54) of studies had established these objective assessment methods in the training curriculum of their respective training programmes (Fig. 6B).
Figure 6:

Chart demonstrating the outcome of the assessment (e.g. improvement in technical skill performance) (A) and whether the assessment method was incorporated into the training curricula (B).


METHODOLOGICAL QUALITY

The mean ± standard deviation MERSQI score for all studies was 13.6 ± 1.5 (maximum score: 18). Fifty-two per cent (28/54) of studies were performed by a single institution, with the rest involving ≥2 institutions. Most studies had a very high response rate, with 96% (52/54) reporting a >75% response. Over 68% (37/54) of assessment tools demonstrated either moderate (48%, 26/54) or high (20%, 11/54) validity owing to the inclusion of content/construct validity and internal structure in their development and evaluation. Ninety-one per cent (49/54) of studies used more than descriptive statistics, which was deemed appropriate on review. The majority of studies (87%, 47/54) assessed knowledge/skills, with 13% (7/54) being performed on real-life patients. No study used patient outcomes as a measure of technical skill performance.
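As a cross-check (not part of the original analysis), the reported summary statistics can be reproduced from the "Total MERSQI score" column of Table 1. A minimal Python sketch, with the 54 scores transcribed from the table in row order:

```python
# Reproduce the reported MERSQI summary (mean ± standard deviation)
# from the "Total MERSQI score" column of Table 1.
from statistics import mean, stdev

# Scores transcribed from Table 1, one per included study (n = 54).
mersqi_scores = [
    11, 13.5, 13.5, 10.5, 14.5, 13, 15.5, 13.5, 13.5, 15.5,
    13.5, 12, 14.5, 14, 15.5, 13, 12.5, 13.5, 15.5, 15.5,
    13, 12.5, 10, 13, 15, 13.5, 14.5, 13.5, 11.5, 14,
    12.5, 13, 15.5, 10, 14.5, 13, 12, 12, 14.5, 15.5,
    16, 15.5, 13, 13.5, 13.5, 12, 12.5, 13.5, 16, 15.5,
    14, 14.5, 13.5, 13.5,
]

assert len(mersqi_scores) == 54  # one score per included study
print(f"{mean(mersqi_scores):.1f} ± {stdev(mersqi_scores):.1f}")  # prints "13.6 ± 1.5"
```

Note that `statistics.stdev` computes the sample standard deviation; the population standard deviation also rounds to 1.5 here, so either convention matches the value reported in the abstract.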

DISCUSSION

Within this review, 85% of assessments were made in the setting of simulation, with 15% being in the intraoperative environment. Although simulation assessments are more readily available and can be performed outside of the operating environment, intraoperative assessments provide trainee surgeons with real-life experience and the associated pressures. In the ideal training curriculum, a trainee surgeon would initially train on simulators to refine their technical skills, fluency and operative sequencing. Once proficient, they would transfer these learnt skills into the intraoperative environment, where further objective assessment is performed to focus on learning and progression. The 2 most commonly assessed tasks were coronary anastomosis (28%) and pulmonary lobectomy (19%), which are the fundamental index procedures. Surprisingly, there was only one study that included objective assessment in aortic valve surgery and none in cardiothoracic transplantation. Although validated aortic valve simulators exist, the lack of studies using objective assessments may be related to no assessment tool being available and/or validated, which may be a focus for future work [3, 5].

Improvement in objective assessment scores

One crucial aspect of simulation training is the demonstration of improvement and surgical skill progression. Not only does this allow simulators to be validated, but it can also be used to identify when a trainee has gained competence in a particular skill and can progress to the next step in their training. Furthermore, regular objective assessment will allow surgeons to focus on their ongoing training needs, potentially streamlining technical skill acquisition. Fifty-six per cent of studies explored objective changes in technical performance, with 97% of these (all except one) demonstrating improvement. The primary goal of the remaining studies was to validate the assessment tool rather than demonstrate the effects of objective assessment.

MERSQI score

The mean MERSQI score across all 54 studies was 13.6. Using a score of 14 as a cut-off for high-quality research, 39% (21/54) of studies fell into this category [14]. Although the majority of studies were non-randomized, nearly half involved more than one institution with a high sampling rate (>75%). The majority of studies demonstrated either moderate or high validity and reliability. Validity refers to whether a test measures what it intends to measure, whereas reliability refers to the precision of the assessment (i.e. if the assessment were repeated, would it produce the same result). Most studies included a combination of the following methods: (i) face ± content validity (i.e. assessment tool contents were reviewed by experts to judge whether they assessed what they intended to ± their appropriateness), (ii) internal structure (i.e. assessment of inter-rater reliability or test–retest) and (iii) construct validity (i.e. demonstrating a difference in performance between expert and trainee/junior surgeons). Although no study demonstrated the actual effect on patient outcomes, all studies assessed technical skill, with 13% assessing performance on real patients. This is further evidence that the use of objective assessment methods in technical skill evaluation lends itself to high-quality educational research, encouraging such methods to be incorporated into training programmes where feasible.

Ideal assessment tool for objective assessment in cardiothoracic surgery

The evaluation of technical skill is crucial for CTS; however, generating objective assessments from unbiased experts and creating reproducible results remains challenging. Various methods have been described to address this. The Likert scale utilizing an OSATS format was the most commonly used assessment tool, with checklist methods appearing infrequently. This is likely because the Likert scale is a thoroughly validated tool and its generic format makes it easy to tailor to multiple surgical procedures [6, 16–20]. The disadvantage of this method is that it limits the feedback it can provide the trainee surgeon, as it does not focus on the specific aspects of the technical skill on which the surgeon failed. The evaluator is an important factor to consider in establishing objective assessment. Within this review, the majority of studies (63%) used ≤2 evaluators, and experts were primarily used (69%). Although the use of expert assessors is the most robust method of assessment, reliance on them is an additional barrier to regular use. Ideally, objective assessments should be validated to be performed automatically or by less-experienced personnel to increase uptake. A number of studies have demonstrated that objective improvement can occur without supervisor/expert supervision within CTS [16, 21, 22]. Crowd-sourced evaluations using lay persons may be a potential solution, with studies demonstrating that, as well as being cost-effective and efficient, they can generate results comparable to those of experts [23]. Barriers to crowd-sourcing include cost, as local experts usually provide feedback free of charge, albeit with limited availability [23]. However, if such methods are used to assess trainee progression, there must be input from experts, whose assessment is crucial to evaluate the holistic aspects of performance and ensure continued assessment validity and reliability.
Video recordings were used in most studies (63%), which may be a potential solution to the above limitation as they allow retrospective, blinded assessment, as performed in 46% of studies. If methods like crowd-sourcing are established, they could potentially allow trainees to benefit from regular, unbiased, objective assessments in an efficient and cost-effective manner and promote better utilization of expert surgeon input [23–25]. For objective assessments to be successfully incorporated into training curricula, the tool needs to be reproducible, easy to use and, potentially, suitable for less-experienced evaluators. There are a number of examples within this review which meet these criteria and are available for training programmes to adopt now. Lou et al. [16] describe the JCSTE (Joint Council on Thoracic Surgery Education) coronary anastomosis assessment tool, which utilizes the OSATS method. Hermsen et al. [26] and Hicks et al. [27] utilized the checklist method to assess the establishment of cardiopulmonary bypass and crisis management. Within thoracic surgery, the OSATS-based VATSAT (Video-Assisted Thoracoscopic Surgery Assessment Tool) is a validated tool for pulmonary resection [18]. In congenital heart surgery, the HOST-CHS (Hands-On Surgical Training in Congenital Heart Surgery) assessment tool uses a hybrid checklist and Likert scale to objectively assess technical performance across the spectrum of congenital heart surgical procedures [4].

Objective assessments within cardiothoracic surgery curricula

Despite concerted efforts to incorporate objective technical skill assessment into CTS training curricula, its utilization worldwide is lacking. Although this study demonstrated that only 39% of studies had incorporated assessment methods into their local curricula, this may be an underestimate, as a large proportion of studies were for validation purposes only. However, when considering this number as a proportion of CTS institutions worldwide, it is likely an overestimate due to selection bias. Institutions that are able to conduct such research and publish their experiences are more likely to be actively involved in incorporating assessments into their curricula. Conversely, institutions/countries poorly represented in the literature are unlikely to be involved in or utilize such methods and should be a focus of future research. Validated objective assessment methods will provide more granular evaluations of surgical performance, which can be delivered through both simulation and intraoperative environments, as demonstrated in this review. Although this review sought to comprehensively evaluate the use of objective assessment in technical skill evaluation in CTS, it is not without its limitations. First, non-English texts were excluded, which potentially introduces selection bias. Second, only published articles were reviewed, which introduces potential publication bias. The heterogeneous nature of the studies and the lack of incorporated data made a more detailed quantitative synthesis unfeasible. This paper purposely focuses on the technical skill performance of trainee surgeons, which is only one aspect of becoming a competent cardiothoracic surgeon, albeit an important one. Trainee surgeons must not only show technical proficiency but also demonstrate competency in non-technical skills. This includes clinical judgement, critical thinking, academic scholarship/education and competency in emotional capabilities like empathy and emotional intelligence [28].
Not all of these skills can be trained in simulation environments or objectively assessed; some can only be acquired over a surgeon’s lifetime through experience. Nonetheless, dedicated training alongside objective assessment remains key to performing at a high technical level. The validated evaluation methods included in this review are helpful for assessing technical improvement after simulation training; however, questions remain regarding their predictive validity and their translation to real-patient performance [28]. The conditions of real-life operations are vastly different from the low-risk environment provided by simulation. Establishing whether improvements achieved in simulation translate to intraoperative performance should be the focus of future research.

CONCLUSION

The reductions in training time and intraoperative exposure for trainees in CTS have driven the growth of simulation-based training and the validation of objective methods for assessing technical skill. Although most studies were conducted for validation purposes only, of those that investigated performance outcomes, all but one demonstrated an objective improvement in technical skill. Despite the availability of these assessment tools, their adoption into training curricula remains sparse. Given the current and future challenges to training, there is a greater need to incorporate objective technical skill assessments. These will ultimately help increase the efficiency and transparency of training programmes and ensure the development of the next generation of cardiothoracic surgeons.

SUPPLEMENTARY MATERIAL

Supplementary material is available at ICVTS online. Conflict of interest: none declared.
REFERENCES (70 in total)

1.  Reliable and valid assessment of performance in thoracoscopy.

Authors:  Lars Konge; Per Lehnert; Henrik Jessen Hansen; René Horsleben Petersen; Charlotte Ringsted
Journal:  Surg Endosc       Date:  2011-12-17       Impact factor: 4.584

2.  Crowdsourcing Assessment of Surgeon Dissection of Renal Artery and Vein During Robotic Partial Nephrectomy: A Novel Approach for Quantitative Assessment of Surgical Performance.

Authors:  Mary K Powers; Aaron Boonjindasup; Michael Pinsky; Philip Dorsey; Michael Maddox; Li-Ming Su; Matthew Gettman; Chandru P Sundaram; Erik P Castle; Jason Y Lee; Benjamin R Lee
Journal:  J Endourol       Date:  2015-12-30       Impact factor: 2.942

3.  Simulation in cardiothoracic surgical training: where do we stand? (Review)

Authors:  Kanika Trehan; Clinton D Kemp; Stephen C Yang
Journal:  J Thorac Cardiovasc Surg       Date:  2014-01       Impact factor: 5.209

4.  A model of cardiopulmonary bypass staged training integrating technical and non-technical skills dedicated to cardiac trainees.

Authors:  V Fouilloux; F Doguet; A Kotsakis; A Dubrowski; S Berdah
Journal:  Perfusion       Date:  2014-05-19       Impact factor: 1.972

5.  Training less-experienced faculty improves reliability of skills assessment in cardiac surgery.

Authors:  Xiaoying Lou; Richard Lee; Richard H Feins; Daniel Enter; George L Hicks; Edward D Verrier; James I Fann
Journal:  J Thorac Cardiovasc Surg       Date:  2014-09-16       Impact factor: 5.209

6.  Evaluation of robotic cardiac surgery simulation training: A randomized controlled trial.

Authors:  Matthew Valdis; Michael W A Chu; Christopher Schlachta; Bob Kiaii
Journal:  J Thorac Cardiovasc Surg       Date:  2016-02-13       Impact factor: 5.209

7.  Commentary: Surgical skill assessment: Time to examine?

Authors:  Elizabeth H Stephens; Joseph A Dearani
Journal:  J Thorac Cardiovasc Surg       Date:  2020-01-11       Impact factor: 5.209

8.  High-Fidelity Emergency Department Thoracotomy Simulator With Beating-Heart Technology and OSATS Tool Improves Trainee Confidence and Distinguishes Level of Skill.

Authors:  Jordan D Bohnen; Leah Demetri; Eva Fuentes; Kathryn Butler; Reza Askari; Rahul J Anand; Emil Petrusa; Haytham M A Kaafarani; D Dante Yeh; Noelle Saillant; David King; Susan Briggs; George C Velmahos; Marc de Moya
Journal:  J Surg Educ       Date:  2018-02-26       Impact factor: 2.891

9.  Thoracic surgery skill proficiency with chest wall tumor simulator.

Authors:  M Blair Marshall; Brette M Wilson; Yvonne M Carter
Journal:  J Surg Res       Date:  2011-03-01       Impact factor: 2.192

10.  Tissue-based coronary surgery simulation: medical student deliberate practice can achieve equivalency to senior surgery residents.

Authors:  Jonathan C Nesbitt; Jamii St Julien; Tarek S Absi; Rashid M Ahmad; Eric L Grogan; Jorge M Balaguer; Eric S Lambright; Stephen A Deppen; Huiyun Wu; Joe B Putnam
Journal:  J Thorac Cardiovasc Surg       Date:  2013-03-15       Impact factor: 5.209

CITED BY (1 in total)

1.  Congenital Heart Surgery Skill Training Using Simulation Models: Not an Option but a Necessity. (Review)

Authors:  Shi-Joon Yoo; Nabil Hussein; David J Barron
Journal:  J Korean Med Sci       Date:  2022-10-03       Impact factor: 5.354

