
Systematic review of learning curves in robot-assisted surgery.

N A Soomro1, D A Hashimoto2, A J Porteous3, C J A Ridley4, W J Marsh4, R Ditto5, S Roy5.   

Abstract

BACKGROUND: Increased uptake of robotic surgery has led to interest in learning curves for robot-assisted procedures. Learning curves, however, are often poorly defined. This systematic review was conducted to identify the available evidence investigating surgeon learning curves in robot-assisted surgery.
METHODS: MEDLINE, Embase and the Cochrane Library were searched in February 2018, in accordance with PRISMA guidelines, alongside hand searches of key congresses and existing reviews. Eligible articles were those assessing learning curves associated with robot-assisted surgery in patients.
RESULTS: Searches identified 2316 records, of which 68 met the eligibility criteria, reporting on 68 unique studies. Of these, 49 assessed learning curves based on patient data across ten surgical specialties. All 49 were observational, largely single-arm (35 of 49, 71 per cent) and included few surgeons. Learning curves exhibited substantial heterogeneity, varying between procedures, studies and metrics. Standards of reporting were generally poor, with only 17 of 49 (35 per cent) quantifying previous experience. Methods used to assess the learning curve were heterogeneous, often lacking statistical validation and using ambiguous terminology.
CONCLUSION: Learning curve estimates were subject to considerable uncertainty. Robust evidence was lacking, owing to limitations in study design, frequent reporting gaps and substantial heterogeneity in the methods used to assess learning curves. The opportunity remains for the establishment of optimal quantitative methods for the assessment of learning curves, to inform surgical training programmes and improve patient outcomes.
© 2019 The Authors. BJS Open published by John Wiley & Sons Ltd on behalf of BJS Society Ltd.


Year:  2019        PMID: 32011823      PMCID: PMC6996634          DOI: 10.1002/bjs5.50235

Source DB:  PubMed          Journal:  BJS Open        ISSN: 2474-9842


Introduction

Learning curves describe the rate of progress in gaining experience or new skills, and are widely reported in surgery. Surgeons typically exhibit improvements in performance over time, often followed by a plateau where minimal additional improvement is observed1. Generally, surgical learning curves are measured as a change in an operative variable (which can be considered a surrogate for surgeon performance) over a series of procedures. Studies investigating learning curves for surgical procedures are becoming increasingly important, as learning curves can have a substantial impact on surgical metrics, clinical outcomes and cost–benefit decisions.

There has been particular interest in learning curves in robot‐assisted surgery, especially in gynaecology and urology2, 3. Despite the reported operative benefits and improved hospital experience provided by robot‐assisted surgery compared with traditional minimally invasive approaches4, 5, uptake of robotic technology has been slow, owing largely to high capital and maintenance costs, and to uncertainty regarding the potential benefits of robot‐assisted approaches over conventional laparoscopic approaches. For example, robot‐assisted approaches have been associated with longer operating times for many procedure types6. A large proportion of these comparative studies, however, may have involved surgeons who were still learning the robotic technology in question4, potentially underestimating the full benefits of robotic assistance. Robot‐assisted approaches have the potential to expedite surgeon learning, but the methods used to measure and define learning curves appear inconsistent1.

Studies evaluating the learning curve for surgical procedures often aim to determine the number of sequential procedures that comprise the learning curve, or that are required to ‘overcome’ the learning curve (sometimes referred to as the learning curve length).
To achieve this aim, studies often define a particular threshold in surgeon performance. A common threshold is reaching a plateau in performance, yet the performance thresholds used are highly inconsistent1. The way learning curves are described can also lead to misinterpretation. Terms such as ‘overcome’, for instance, could be considered misnomers, as they imply that surgeons have mastered a procedure, for which certain performance thresholds may not provide sufficient evidence. A plateau in performance, for example, does not necessarily equate to high‐quality performance; it implies only that a surgeon is no longer improving1. There remains a need to understand better the learning curve of robot‐assisted surgery and to characterize broadly how learning curves are defined and reported. This systematic review was performed to characterize the current evidence base and to appraise the methods used to define and measure learning curves for surgeons performing robot‐assisted surgery, taking a holistic, panspecialty view.
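The idea of tracking an operative variable over consecutive procedures and declaring a plateau can be made concrete with a small sketch. This is purely illustrative and not a method drawn from any of the reviewed studies; the block size and improvement tolerance are arbitrary assumptions chosen for demonstration.

```python
# Illustrative sketch: estimate a learning curve "length" by averaging an
# operative metric (e.g. operating time in minutes) over consecutive blocks
# of procedures and finding the first block after which improvement falls
# below a chosen tolerance. Block size and tolerance are assumptions.

def block_means(values, block=5):
    """Mean of the metric over each consecutive block of `block` procedures."""
    return [sum(values[i:i + block]) / block
            for i in range(0, len(values) - block + 1, block)]

def plateau_block(means, tol=5.0):
    """Index of the first block whose improvement over the previous block
    is below `tol`; None if no plateau is observed within the series."""
    for i in range(1, len(means)):
        if means[i - 1] - means[i] < tol:
            return i
    return None
```

Note that, as discussed above, such a plateau only indicates that improvement has slowed; it says nothing about whether the plateaued performance is itself of high quality.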

Methods

This systematic review was conducted in accordance with a prespecified protocol and the PRISMA guidelines7. MEDLINE, MEDLINE In‐Process, Embase, the Cochrane Database of Systematic Reviews, the Cochrane Central Register of Controlled Trials, and the NHS Economic Evaluation Database were searched. Surgical training has evolved alongside the rapid development of robotic technologies; database searches were therefore limited to the period from 1 January 2012 to 5 February 2018, in order to capture studies investigating learning curves in the context of training relevant to current practice. The search terms used are provided in the tables in the supporting information. The two most recent abstract books of relevant surgical congresses were also searched, from 1 January 2016 to 14 February 2018. This review considered only primary research, and excluded review articles. Supplementary hand searches of the bibliographies of relevant systematic reviews were conducted to identify any primary studies not identified elsewhere.

The review process was performed by two independent reviewers, who assessed the titles and abstracts of all search results (stage 1), as well as the full texts of all potentially eligible studies identified in the first stage (stage 2). In the event of discrepancies, the two reviewers came to a consensus for each decision; in the absence of a consensus, a third independent reviewer resolved any disagreements.

Eligible publications included any randomized or non‐randomized, comparative or observational studies involving wet‐ or dry‐lab testing, simulations, patients or registry/economic analyses that performed a learning curve analysis (consisting of a graph and/or reported data for at least four time points) of surgeons performing robot‐assisted surgery in any specialty. Studies were required to report learning curve results from more than one surgeon, working alone or as part of a surgical team, of any specialization (robot‐assisted, laparoscopic or open). Where the number of surgeons involved was not reported, included studies were required to have multiple authors. Only studies that included at least 20 surgical procedures in the analysis of the learning curve were considered. At least one of the following metrics had to be reported: time to plateau/number of ‘phases’ in the learning curve; statistical differences in metrics assessed over time; or learning percentages. Detailed eligibility criteria are shown in the supporting information. The studies reported in this review are restricted to learning curve analyses of procedures on patients.

Data extraction and quality assessment

For each eligible study, data were extracted into a prespecified grid by one reviewer, with verification by a second, independent reviewer. Where there was a discrepancy, the two reviewers attempted to come to a consensus; a third reviewer resolved any disagreements in the absence of a consensus. Captured data included study design, methodology, surgeon experience, robotic technology used and the metric measured to evaluate the learning curve. Information relating to the learning curve itself was captured, including the number of phases of the curve, the number of operations per phase and the number of procedures to overcome the learning curve (denoted in this review as the point where the chosen performance threshold was considered to have been overcome). Where reported, the specific performance threshold used was captured. If the learning curve had not been overcome within the study period, the number of procedures to overcome the learning curve was reported to be greater than the total number included in the study period. The quality of each eligible study was assessed using either the UK National Institute for Health and Care Excellence (NICE) RCT checklist8 or a modified version of the Downs and Black checklist for non‐randomized studies9.

Results

A total of 2316 records from electronic database searches, conference abstract searches and hand searches were identified. Of these, 281 full‐text articles were assessed for eligibility, of which 213 were excluded (Table  , supporting information). The remaining 68 records (reporting on 68 unique studies) were found to meet the eligibility criteria, 49 of which reported on patient data and are presented here (Fig. 1).
Figure 1

PRISMA diagram for the systematic literature review

Characteristics of included articles

Characteristics of the 49 eligible studies presenting learning curves derived from patient procedures are presented in Table 1 10–58. All were observational in design. Data were analysed retrospectively in 40 of the 49 studies (82 per cent); the remaining nine studies (18 per cent) were prospective in design. The majority (35 of 49, 71 per cent) were single‐arm studies, and the remainder (14 of 49, 29 per cent) were comparative. In the 41 studies that explicitly reported the number of participating robotic surgeons, this number was generally small: most (33 of 41, 80 per cent) included fewer than five robotic surgeons, and over half of these (18 of 33, 55 per cent) included fewer than three (Fig. 2). The captured studies spanned ten surgical specialties (Fig. 3). Learning curves were reported most frequently for urology, general surgery and gynaecology.
Table 1

Details of studies included in the systematic literature review

Reference | Design | Surgical specialty | Procedure(s)/task(s) performed | Study arms (surgeon experience)* | No. of robotic surgeons
Albergotti et al.10 | Retrospective single‐arm observational | ORL | TORS | Single arm | 3
Arora et al.11 | Retrospective single‐arm observational | Urology | RKT | Single arm | n.r.
Benizri et al.12 | Retrospective controlled observational | General | RDP versus LDP | RDP | 2
Bindal et al.13 | Retrospective single‐arm observational | Bariatric | LRRYGB and TRRYGB | Single arm | 2
Binet et al.14 | Retrospective single‐arm observational | Paediatric | Robot‐assisted fundoplications | Single arm | 2
Boone et al.15 | Retrospective single‐arm observational | General | RPD | Single arm | 4
Chang et al.16 | Retrospective controlled observational | Urology | RARP | RARP (open experience) | 1
 |  |  |  | RARP (open/laparoscopic experience) | 1
 |  |  |  | RARP (laparoscopic experience) | 1
Ciabatti et al.17 | Prospective single‐arm observational | ORL | Transaxillary robot‐assisted thyroid surgery | Single arm | n.r.
D'Annibale et al.18 | Retrospective single‐arm observational | Urology | Robot‐assisted adrenalectomy | Single arm | n.r.
Davis et al.19 | Retrospective controlled observational | Urology | RARP versus ORP | RARP | 744
Dhir et al.20 | Retrospective single‐arm observational | General | Robot‐assisted HAI pump placement | Single arm | n.r.
Esposito et al.21 | Retrospective single‐arm observational | Paediatric | REVUR | Single arm | 4
Fahim et al.22 | Retrospective single‐arm observational | Thoracic | RATS | Single arm | 3
Fossati et al.23 | Retrospective single‐arm observational | Urology | RARP | Single arm | 4
Geller et al.24 | Retrospective single‐arm observational | Gynaecology | RSC | Single arm | 2
Good et al.25 | Retrospective controlled observational | Urology | RARP versus LRP | RARP | 1
Goodman et al.26 | Retrospective single‐arm observational | Cardiovascular | Robot‐assisted mitral valve repair | Single arm | 2
Guend et al.27 | Retrospective single‐arm observational | Colorectal | Robotic colorectal resection | Single arm | 4
Kamel et al.28 | Retrospective single‐arm observational | Thoracic | RAT | Single arm | 4
Kim et al.29 | Retrospective controlled observational | Colorectal | RRS | RRS (laparoscopically inexperienced) | 1
 |  |  |  | RRS (laparoscopically experienced) | 1
Lebeau et al.30 | Prospective controlled observational | Urology | RARP | RARP (expert laparoscopic surgeons) | 1
 |  |  |  | RARP (junior surgeons) | 1
Linder et al.31 | Retrospective single‐arm observational | Gynaecology | RSC | Single arm | 2
Lopez et al.32 | Retrospective controlled observational | Gynaecology | Robot‐assisted single‐site laparoscopic hysterectomy versus LESS hysterectomy | Robot‐assisted single‐site laparoscopic hysterectomy | 3
Lovegrove et al.33 | Prospective single‐arm observational | Urology | RARP | Single arm | 15
Luciano et al.34 | Retrospective controlled observational | Gynaecology | Robot‐assisted versus laparoscopic versus vaginal versus abdominal hysterectomy | Robot‐assisted hysterectomy | 1315
Meyer et al.35 | Retrospective single‐arm observational | Thoracic | Robotic lobectomy | Single arm | 2
Myers et al.36 | Retrospective single‐arm observational | Gynaecology | RSC | Single arm | 2
Nelson et al.37 | Retrospective single‐arm observational | General | RC | Single arm | 8
Odermatt et al.38 | Retrospective controlled observational | Colorectal | Robot‐assisted TME versus laparoscopic TME | Robot‐assisted TME | 2
Park et al.39 | Prospective single‐arm observational | General | Less than total robot‐assisted thyroidectomy | Single arm | 2
Park et al.40 | Retrospective single‐arm observational | General | RAG | Single arm | 3
Paulucci et al.41 | Retrospective single‐arm observational | Urology | RAPN | Single arm | 2
Pietrabissa et al.42 | Prospective single‐arm observational | General | SSRC | Single arm | 5
Pulliam et al.43 | Retrospective controlled observational | Gynaecology | RSC versus LSC | RSC | 3
Riikonen et al.44 | Retrospective single‐arm observational | Urology | RALP | Single arm | 12
Sarkaria et al.45 | Retrospective single‐arm observational | General | RA‐GPEHR | Single arm | n.r.
Schatlo et al.46 | Retrospective single‐arm observational | Orthopaedic | Robot‐assisted placement of pedicle screws | Single arm | 13
Shakir et al.47 | Retrospective single‐arm observational | General | RDP | Single arm | 3
Sivaraman et al.48 | Retrospective controlled observational | Urology | RARP versus LRP | RARP | 9
Sood et al.49 | Prospective controlled observational | Urology | RKT | RKT (extensive RKT experience, limited OKT experience) | 1
 |  |  |  | RKT (extensive RKT and OKT experience) | 1
 |  |  |  | RKT (limited RKT experience, extensive OKT experience) | 1
Tasian et al.50 | Prospective controlled observational | Urology | Robot‐assisted pyeloplasty | Robot‐assisted pyeloplasty (fellow surgeon) | 4
 |  |  |  | Robot‐assisted pyeloplasty (attending surgeon) | 1
Tobis et al.51 | Retrospective single‐arm observational | Urology | RAPN | Single arm | 3
van der Poel et al.52 | Retrospective single‐arm observational | Urology | LND during RARP | Single arm | 2
Vidovszky et al.53 | Prospective single‐arm observational | General | SSRC | Single arm | n.r.
White et al.54 | Prospective single‐arm observational | ORL | TORS | Single arm | n.r.
Woelk et al.55 | Retrospective single‐arm observational | Gynaecology | Robot‐assisted hysterectomy | Single arm | 2
Wolanski et al.56 | Retrospective controlled observational | Urology | RALP versus LRP | RALP | 2
Zhou et al.57 | Retrospective single‐arm observational | General | RAG | Single arm | 2
Zureikat et al.58 | Retrospective single‐arm observational | General | Robot‐assisted pancreatic resections | Single arm | n.r.

Surgeon experience only stated in studies with multiple robotic study arms. ORL, otorhinolaryngology; TORS, transoral robot‐assisted surgery; RKT, robot‐assisted kidney transplantation; n.r., not reported; RDP, robot‐assisted distal pancreatectomy; LDP, laparoscopic distal pancreatectomy; LRRYGB, laparoscopic robot‐assisted Roux‐en‐Y gastric bypass; TRRYGB, totally robot‐assisted Roux‐en‐Y gastric bypass; RPD, robot‐assisted pancreatoduodenectomy; RARP, robot‐assisted radical prostatectomy; ORP, open radical prostatectomy; HAI, hepatic artery infusion; REVUR, robot‐assisted extravesical ureteral reimplantation; RATS, robot‐assisted thoracic surgery; RSC, robot‐assisted sacrocolpopexy; LRP, laparoscopic radical prostatectomy; RAT, robot‐assisted thymectomy; RRS, robot‐assisted rectal cancer surgery; LESS, laparoendoscopic single‐site; RC, robot‐assisted cholecystectomy; TME, total mesorectal excision; RAG, robot‐assisted gastrectomy; RAPN, robot‐assisted partial nephrectomy; SSRC, single‐site robot‐assisted cholecystectomy; LSC, laparoscopic sacrocolpopexy; RALP, robot‐assisted laparoscopic prostatectomy; RA‐GPEHR, robot‐assisted giant para‐oesophageal hernia repair; OKT, open kidney transplantation; LND, lymph node dissection.

Figure 2

Bar chart showing the number of surgeons in robot‐assisted study arms per study. One study enrolled only a single robot‐assisted surgeon, but was considered eligible because the total number of enrolled surgeons was greater than one (the study also evaluated a surgeon who performed procedures laparoscopically).

Figure 3

Pie chart of surgical specialties captured in the systematic literature review. Values indicate the number of studies involving the specialty.


Learning curve metrics

Time‐based metrics were the most commonly reported variables used to assess the learning curve, reported by 42 of the 49 studies (86 per cent). Other measures, including length of hospital stay, morbidity and mortality rates, and procedure‐specific metrics, were reported less commonly (Fig. 4). Of the categories of metrics captured, duration of surgery, length of stay (LOS) and complication rate were reported most frequently within each category. The number of procedures required to overcome the learning curve for these metrics is shown in Table 2.
Figure 4

Bar chart of the learning curve metrics assessed across the captured studies. LOS, length of stay.

Table 2

Learning curve results for duration of surgery, length of stay and complication rate

Procedure | No. of robotic surgeons | Procedures to overcome learning curve (subgroup)* | Specific threshold in surgeon performance | Reference

Operating time: Urology
RARP | 2 | > 15 (expert laparoscopic surgeon) | No. of procedures to reach break point | Lebeau et al.30
 |  | 15 (junior surgeon) |  | 
 | 3 | 140 (open experience) | No. of procedures to reach plateau | Chang et al.16
 |  | 40 (open/laparoscopic experience) |  | 
 |  | > 79 (laparoscopic experience) |  | 
Robot‐assisted pyeloplasty | 4 | 37 (fellows) | No. of procedures to achieve median operating time of an attending surgeon | Tasian et al.50
Robot‐assisted adrenalectomy | n.r. | 12 | No. of procedures to reach plateau | D'Annibale et al.18
RAPN | 3 | > 100 | Identifying a plateau effect | Tobis et al.51
RALP | 2 | 20 | No. of procedures to approach median operative duration of LRP and presence of a plateau effect | Wolanski et al.56

Operating time: General surgery
Robot‐assisted HAI pump placement | n.r. | 8 | No. of procedures to reach a decline in CUSUM curve | Dhir et al.20
RPD | 4 | 80 | No. of procedures to reach a decline in CUSUM curve | Boone et al.15
 | n.r. | > 132 | n.r. | Zureikat et al.58
Robot‐assisted thyroidectomy | 2 | 19; 20 | No. of procedures to reach plateau | Park et al.39
RDP | 2 | > 11 | n.r. | Benizri et al.12
 | 3 | 40 | No. of procedures to reach a decline in CUSUM curve | Shakir et al.47
 | n.r. | > 83 | n.r. | Zureikat et al.58
RAG | 3 | 8·2 | No. of procedures to reach stabilization | Park et al.40
 | 2 | 14; 21 | No. of procedures to reach a decline in CUSUM curve | Zhou et al.57
RC | 8 | > 6 | n.r. | Nelson et al.37
SSRC | 5 | 0 | n.r. | Pietrabissa et al.42

Operating time: Gynaecology
RSC | 2 | > 24; 60 | No. of procedures to reach plateau | Linder et al.31
 | 3 | 0 | Moving block technique to detect a significant drop‐off | Pulliam et al.43
Robot‐assisted hysterectomy | 1315 | > 150 | n.r. | Luciano et al.34

Operating time: Colorectal
Robot‐assisted colorectal resection | 4 | 74 (early adapter) | No. of procedures to reach a CUSUM curve phase change | Guend et al.27
 |  | 25–30 (later adapters) |  | 
Robot‐assisted TME | 2 | 7; 15 | No. of procedures to achieve comparable performance to laparoscopy via CUSUM analysis | Odermatt et al.38
RRS | 2 | 17 (laparoscopically inexperienced) | No. of procedures to reach plateau | Kim et al.29
 |  | 0 (laparoscopically experienced) |  | 

Operating time: Thoracic
RAT | 4 | 15–20 | No. of procedures to reach plateau | Kamel et al.28
Robot‐assisted lobectomy | 2 | 15 | No. of procedures to reach plateau | Meyer et al.35

Operating time: Paediatric
Laparoscopic robot‐assisted fundoplications | 2 | 25 | No. of procedures to reach plateau | Binet et al.14
REVUR | 4 | 7–8 | No. of procedures to reach plateau | Esposito et al.21

Operating time: Cardiovascular
Robot‐assisted mitral valve repair | 2 | > 404 | No. of procedures to reach plateau | Goodman et al.26

Length of stay: Colorectal
Robot‐assisted TME | 2 | 0; 15 | No. of procedures to achieve comparable performance to laparoscopy via CUSUM analysis | Odermatt et al.38

Length of stay: Thoracic
Robot‐assisted lobectomy | 2 | > 185 | No. of procedures to reach plateau | Meyer et al.35

Complications: Urology
LND during RALP | 2 | > 400 | No. of procedures to reach plateau | van der Poel et al.52

Complications: General surgery
Robot‐assisted pancreatic resections | n.r. | > 132 | n.r. | Zureikat et al.58

Complications: Gynaecology
RSC | 2 | 0 | Proficiency set at less than 10% complication rate | Myers et al.36
 | 2 | > 24; 84 | Proficiency defined as point where CUSUM curve crossed and consistently stayed below reference line of 0 (based on expected complication rate) | Linder et al.31
Robot‐assisted hysterectomy | 2 | 12; 14 | Proficiency defined as the point where CUSUM curve crossed lower control limit H0 | Woelk et al.55
 | 1315 | > 150 | n.r. | Luciano et al.34

Complications: Colorectal
Robot‐assisted TME | 2 | 0; 15 | No. of procedures to achieve comparable performance to laparoscopy via CUSUM analysis | Odermatt et al.38

Complications: Cardiovascular
Robot‐assisted mitral valve repair | 2 | > 404 | No. of procedures to reach plateau | Goodman et al.26

Studies that did not report whether the learning curve had or had not been overcome within the study period were not included in this table.

For studies that reported a consistent improvement in metrics across the course of the study, it was assumed that the learning curve had not been overcome within the study period. If the learning curve had not been overcome, the number of procedures to overcome the learning curve was reported to be greater than (>) the total number of procedures in the study period. If the learning curve was reported to have been overcome (or surgeons were reported to be proficient/competent) before study initiation, the number of procedures required to overcome the learning curve was recorded as zero. Where results were reported separately for individual surgeons with no clear differences in previous experience, learning curve estimates are reported separately, separated by a semicolon; where the experience of surgeons was intentionally different, individual experience is reported as separate rows and experience level is stated in brackets. RARP, robot‐assisted radical prostatectomy; n.r., not reported; RAPN, robot‐assisted partial nephrectomy; RALP, robot‐assisted laparoscopic prostatectomy; LRP, laparoscopic radical prostatectomy; HAI, hepatic artery infusion; CUSUM, cumulative sum; RPD, robot‐assisted pancreatoduodenectomy; RDP, robot‐assisted distal pancreatectomy; RAG, robot‐assisted gastrectomy; RC, robot‐assisted cholecystectomy; SSRC, single‐site robot‐assisted cholecystectomy; RSC, robot‐assisted sacrocolpopexy; TME, total mesorectal excision; RRS, robot‐assisted rectal cancer surgery; RAT, robot‐assisted thymectomy; REVUR, robot‐assisted extravesical ureteral reimplantation; LND, lymph node dissection.
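Many of the thresholds in Table 2 rest on cumulative sum (CUSUM) analysis, in which the deviation of each successive case's outcome from a target value is summed; the resulting curve rises while performance is worse than the target and begins a sustained decline once performance improves past it, so the peak is often taken as the end of the learning phase. The following is a minimal sketch of that idea only, not the implementation used by any captured study; using operating time as the metric and the series mean as the target are illustrative assumptions.

```python
# Illustrative CUSUM sketch: cumulative sum of deviations of each case's
# operating time from a target value. The peak of the curve, followed by a
# sustained decline, is commonly read as the end of the learning phase.

def cusum(values, target):
    """Running cumulative sum of (value - target) over consecutive cases."""
    total, curve = 0.0, []
    for v in values:
        total += v - target
        curve.append(total)
    return curve

def peak_case(curve):
    """1-based case number at which the CUSUM curve peaks."""
    return max(range(len(curve)), key=lambda i: curve[i]) + 1
```

In practice, published CUSUM analyses typically add decision or control limits (as in the Woelk et al.55 and Linder et al.31 rows above) rather than reading the peak by eye, and may risk‐adjust the target for case mix.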


Duration of surgery

Across 33 studies that investigated the learning curve based on duration of surgery, 27 reported whether the learning curve had been overcome. In 21 of these 27 studies (78 per cent), at least one of the included surgeons was reported to have overcome the learning curve, with the remaining six studies (22 per cent) stating that the learning curve had not been overcome by any surgeon within the number of procedures in the study period (Table 2). Among studies where the learning curve was not overcome by any surgeon, the number of procedures assessed over the study period varied considerably, with a range of 6–404 patients12, 26, 34, 37, 51, 58. Of the studies in which the learning curve had been overcome, 12–140 patients were needed for urological procedures16, 18, 30, 50, 56, 0–80 for general surgical procedures (where 0 suggests no apparent learning process required, if the surgeon was already experienced)15, 20, 39, 40, 42, 47, 57, 0–60 for gynaecological procedures31, 43, 0–74 for colorectal procedures27, 29, 38, 15–20 for thoracic procedures28, 35 and 7–25 for paediatric procedures14, 21. Among the 27 studies that reported whether the learning curve had been overcome, learning curve analyses were conducted for 22 unique procedures, of which only five (23 per cent) were supported by multiple studies. In such instances, the number of patients required to overcome the learning curve varied substantially between studies. For example, three studies evaluated the learning curve for duration of surgery for robot‐assisted distal pancreatectomy. In one study47 the learning curve was overcome after 40 patients, whereas in the other two12, 58 it had not been overcome within the study period (of 11 and 83 patients).

Length of stay

Of five studies reporting LOS, two reported on whether the learning curve had been overcome by at least one robotic surgeon (Table 2). One study38 estimated that the number of patients required to overcome the learning curve was between 0 and 15. In the other study35, the learning curve was not overcome within a study period of 185 patients.

Complications

Of nine studies assessing complications, eight reported whether the learning curve was overcome for complication rate. Of these, five26, 31, 34, 52, 58 found that the learning curve for complications had not been overcome for at least one robotic surgeon within the study period. With the exception of one study31, which included a surgeon with a study period of only 24 patients, these studies generally involved large patient numbers (132–404), and spanned urology, general surgery, gynaecology and cardiovascular specialties (Table 2). Only four studies reported that the learning curve for complications had been overcome. The numbers of procedures were estimated as: 0–84 for robot‐assisted sacrocolpopexy31, 36, 12–14 for robot‐assisted hysterectomy55 and 0–15 for robot‐assisted total mesorectal excision38.

Clinical metrics

Of the 49 studies, eight (16 per cent) evaluated whether the learning curve for clinical metrics had been overcome (Table 3). Metrics included oncology‐specific metrics such as surgical margin status and recurrence rate, and urology‐specific metrics such as urinary continence. The number of procedures to overcome the learning curve varied substantially. Of the two studies assessing urinary continence after robot‐assisted radical prostatectomy, one25 reported that 100 procedures were required to overcome the learning curve, whereas in the other23 the learning curve was not overcome by any of the four robotic surgeons, with study periods ranging from 112 to 541 patients. For all other clinical metrics, the learning curve was overcome by at least one surgeon during the study period, with a wide range of 0–300 patients to achieve this target.
Table 3

Learning curve results for clinical metrics

Specific metric | Procedure | No. of robotic surgeons | Procedures to overcome learning curve (subgroup)* | Specific threshold in surgeon performance | Reference

Urinary continence
Probability of UC recovery after 1 year | RARP | 4 | > 112; > 411; > 413; > 541 | No. of procedures to reach plateau | Fossati et al.23
Early UC | RARP | 1 | 100 | No. of procedures to reach plateau | Good et al.25

Renal function
Serum creatinine level | RKT | 3 | 0 (extensive RKT, limited OKT) | No. of procedures to reach ‘transition point’ in CUSUM curve | Sood et al.49
 |  |  | 0 (extensive RKT/OKT) |  | 
 |  |  | 3 (limited RKT, extensive OKT) |  | 
Glomerular filtration rate | RKT | 3 | 0 (extensive RKT, limited OKT) | No. of procedures to reach ‘transition point’ in CUSUM curve | Sood et al.49
 |  |  | 0 (extensive RKT/OKT) |  | 
 |  |  | 3 (limited RKT, extensive OKT) |  | 

Biochemical recurrence
Biochemical recurrence | RARP | 9 | 100 | No. of procedures to reach ‘transition point’ in CUSUM curve | Sivaraman et al.48

Surgical margins
Positive surgical margins | RARP | 9 | 100 | No. of procedures to reach ‘transition point’ in CUSUM curve | Sivaraman et al.48
Apical positive surgical margins | RARP | 1 | 0 | No. of procedures to reach plateau | Good et al.25
Node positivity rate | LND during RALP | 2 | 300 | No. of procedures to reach plateau | van der Poel et al.52
Initial margin status | TORS | 3 | 15; 22; > 68 | No. of procedures to reach inflection point in CUSUM curve | Albergotti et al.10
Final positive surgical margins | TORS | 3 | 25; 27; > 37 | No. of procedures to reach inflection point in CUSUM curve | Albergotti et al.10

Lymph node yield
No. of removed nodes | LND during RALP | 2 | 250 | No. of procedures to reach plateau | van der Poel et al.52
Lymph node harvest | RPD | 4 | 80 | No. of procedures to reach significant improvement | Boone et al.15
Lymph node harvest | Robot‐assisted TME | 2 | 0 | No. of procedures to achieve comparable performance to laparoscopy via CUSUM analysis | Odermatt et al.38

Studies that did not report whether the learning curve had or had not been overcome within the study period are not included in this table.

For studies that reported a consistent improvement in metrics across the course of the study, it was assumed that the learning curve had not been overcome within the study period. If the learning curve had not been overcome, the number of procedures to overcome the learning curve was reported to be greater than (>) the total number of procedures in the study period. If the learning curve was reported to have been overcome (or surgeons were reported to be proficient/competent) before study initiation, the number of procedures to overcome the learning curve was recorded as zero. Where results were reported separately for individual surgeons with no clear differences in previous experience, learning curve estimates are reported separately, separated by a semicolon; where the experience of surgeons was intentionally different, individual experience is reported as separate rows and experience level is stated in brackets. UC, urinary continence; RARP, robot‐assisted radical prostatectomy; RKT, robot‐assisted kidney transplantation; OKT, open kidney transplantation; CUSUM, cumulative sum; LND, lymph node dissection; RALP, robot‐assisted laparoscopic prostatectomy; TORS, transoral robotic surgery; RPD, robot‐assisted pancreatoduodenectomy; TME, total mesorectal excision.

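Several of the studies summarized in Table 3 located a ‘transition point’ in the learning curve using cumulative sum (CUSUM) analysis. As a rough illustration of the general technique only (not a reconstruction of any included study's method), the sketch below applies CUSUM to a hypothetical series of operating times against an assumed target value; the data and target are invented for illustration:

```python
import numpy as np

def cusum_learning_curve(times, target):
    """Cumulative sum of deviations of operating time from a target.

    While a surgeon is slower than the target, the CUSUM curve rises;
    once cases become consistently faster, it falls. The 'transition
    point' is often read as the case at which the curve peaks and
    begins a sustained decline.
    """
    deviations = np.asarray(times, dtype=float) - target
    cusum = np.cumsum(deviations)
    transition = int(np.argmax(cusum)) + 1  # 1-based case number at the peak
    return cusum, transition

# Hypothetical operating times (minutes) for ten consecutive cases
times = [260, 250, 255, 245, 240, 230, 215, 210, 205, 200]
cusum, transition = cusum_learning_curve(times, target=230)
# transition -> 5: the curve peaks at case 5 and falls thereafter
```

In practice, published CUSUM analyses differ in how the target is set (institutional benchmark, expert performance, or risk-adjusted expectation) and in whether the change-point is validated statistically, which is part of the heterogeneity this review describes.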

Within‐study comparison between metrics

Some studies in the review evaluated the learning curve using more than one metric. The number of patients to overcome the learning curve was sometimes inconsistent between metrics. For example, of the two studies35, 38 that reported whether or not the learning curve was overcome for both duration of surgery and LOS, one35 indicated that substantially greater procedural experience was required for LOS, with more than 170 additional patients required to overcome the learning curve based on this metric.

Standards of reporting

The overall standard of reporting and the level of detail provided in the included studies were low; studies often lacked sufficient information to interpret the learning curve for robot‐assisted procedures. For example, although 34 of the 49 studies (69 per cent) made some acknowledgement of the previous experience of included surgeons (Table 4), the detail of reporting was mixed: only 17 of 49 studies (35 per cent) quantified previous experience in terms of robot‐assisted (8 of 49, 16 per cent), laparoscopic (13 of 49, 27 per cent) or open (6 of 49, 12 per cent) operations completed. Only four of 49 studies (8 per cent) indicated whether simulation, dry lab or cadaver training had been completed before patient enrolment.
Table 4

Studies reporting previous surgeon experience

Surgeon's previous procedure experience

Reference | Robot‐assisted | Laparoscopic | Open | Approach not defined | Simulated robotic training before study
Albergotti et al.10 ?
Arora et al.11 ? ?
Benizri et al.12 ?
Bindal et al.13 ?
Binet et al.14 ?
Chang et al.16 ? ?
Dhir et al.20 ?
Geller et al.24 ? ?
Good et al.25 ?
Goodman et al.26 ?
Guend et al.27 ? ?
Kamel et al.28 ?
Kim et al.29 ?
Lebeau et al.30
Lopez et al.32
Lovegrove et al.33 ?
Meyer et al.35
Odermatt et al.38
Park et al.40
Park et al.39
Pietrabissa et al.42
Pulliam et al.43 ? ?
Riikonen et al.44 ? ?
Sarkaria et al.45 ?
Shakir et al.47 ? ?
Sivaraman et al.48
Sood et al.49
Tasian et al.50 ?
Tobis et al.51
van der Poel et al.52 ?
Woelk et al.55 ? ?
Wolanski et al.56 ?
Zhou et al.57
Zureikat et al.58 ?

✓, Study quantified the amount of previous experience (procedures, days of simulated training); ?, study acknowledged previous experience but did not quantify it; ✘, study did not acknowledge previous experience.

Variability was observed in the performance thresholds used to measure the learning curve. For example, among the 27 studies that defined whether the learning curve for duration of surgery had been overcome (Table 2), the most common performance threshold was the number of procedures needed to reach a plateau in performance; other thresholds included the number of procedures to reach a change in phase, or the number to achieve a predetermined skill threshold set by an expert surgeon, and some studies did not specify the performance threshold used (Fig. 5a). In nine of the 27 studies (33 per cent), no statistical or quantitative assessment of the learning curve was reported beyond visual fit or a qualitative description (data not reported). Similar variation in learning curve definitions was observed for analyses of LOS and complication rates (Fig. 5b,c).
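The ‘number of procedures to reach a plateau’ threshold is rarely given an explicit quantitative definition in the included studies. One possible operationalization, sketched below purely for illustration, declares a plateau once the moving average of the metric stops changing by more than a chosen tolerance; the window size and tolerance are arbitrary assumptions of this sketch, not values taken from any study:

```python
import numpy as np

def procedures_to_plateau(metric, window=5, tol=0.02):
    """Estimate the case number at which a performance metric plateaus.

    A plateau is declared when the relative change between consecutive
    moving-average values stays below `tol`. Both `window` and `tol`
    are illustrative choices; without statistical validation, different
    choices yield different 'learning curve lengths'.
    """
    x = np.asarray(metric, dtype=float)
    means = np.convolve(x, np.ones(window) / window, mode="valid")
    for i in range(1, len(means)):
        if abs(means[i] - means[i - 1]) / abs(means[i - 1]) < tol:
            return i + window  # case count at the end of the stable window
    return None  # learning curve not overcome within the series

# Hypothetical operating times (minutes) for consecutive cases
times = [300, 280, 260, 240, 220, 210, 205, 202, 201, 200, 200, 200]
plateau_case = procedures_to_plateau(times)
```

The sensitivity of such a rule to its parameters illustrates why, as noted above, plateau-based estimates reported without statistical or quantitative assessment are difficult to compare across studies.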
Figure 5

Pie chart of methods used to define the point at which learning curve was overcome

Several studies used these methods to define whether surgeons had achieved a high level of performance, characterized by terms such as proficiency or competency. However, these terms were used inconsistently. In 13 of 14 studies (93 per cent) that employed performance terms, at least one term was used to describe the point where the learning curve had been overcome; ‘proficiency’ was used for this purpose in ten studies22, 27, 28, 31, 36, 38, 39, 42, 50, 55, ‘competency’ in five studies10, 27, 33, 36, 49 and ‘expertise’ in one study50. In the four studies that reported more than one performance term, the terms were either used interchangeably36, 50 or assigned divergent definitions33, 49.

Although quality assessment of included studies (Table S5, supporting information) revealed a relatively low risk of bias for several quality assessment items, risk of bias was unclear for a large proportion of the questions, suggesting poor reporting of methodology. In particular, the risk of bias with respect to the blinding of subjects, external validity of included populations and study centres, and statistical power was either high or could not be determined (at least 45 of the 49 studies, 92 per cent).

Discussion

This review identified substantial variation in the lengths of learning curves, the metrics included and the methods employed to assess the learning curve, as well as in the reporting of the analyses and the terminology used across ten surgical specialties. Reported learning curve estimates are therefore subject to substantial uncertainty, and the generalizability of these findings is limited. The results of the 49 eligible studies suggested that surgeon learning curves were complex, varying significantly between studies, procedures and the specific metrics assessed.

A variety of factors could account for much of the variation in reported learning curve length. The surgeon's previous experience may have been a significant factor; in three of five studies comparing the operating time learning curves of robotic surgeons, those with greater experience required fewer procedures to overcome their learning curve16, 29, 38. Although the captured studies often compared surgeons with different experience levels, such as trainees versus those who had completed training, or robotic versus laparoscopic surgeons, studies generally did not report the participants' specific grade or training experience.

Robotic training programmes are becoming increasingly common to enable surgeons to overcome the learning curve faster59, 60. In addition to previous experience, participation in specific training programmes may influence the learning process. Although based on a small sample size, Guend and colleagues27 reported that a lower procedure volume was required to overcome the learning curve for robot‐assisted colorectal resections for three surgeons who had participated in an institutional training programme (25–30 procedures each), compared with the volume required for an earlier surgeon who joined the institution before the programme was established (74 procedures). Few studies, however, provided details of their training programmes, where these existed. Recent training programmes have considered innovations such as feedback loops that aim to provide specific recommendations for improvement, shortening the time required to achieve adequate performance61.

Differences in procedural complexity may also have contributed to variation in the learning curves observed. In surgical practice, following initial improvement and subsequent stabilization of performance, a decline in performance is often observed62. This decline is thought to reflect the point at which, following mastery of simpler procedures, surgeons take on more challenging, technically complex procedures that impact on the learning curve63. Some procedures are inherently more complex and challenging than others; studies64, 65, 66 of simulated robotic training tasks have observed learning curves of different duration for tasks of varying complexity. Learning curves are likely to be influenced by numerous observed and unobserved confounders. To account better for such differences, and to permit comparisons between studies, enhanced reporting of surgeon baseline characteristics, experience and procedure complexity is required.

This review has highlighted a number of limitations associated with reporting learning curves for robot‐assisted surgery. Many studies failed to describe the characteristics of the surgeons, patients or methods of assessment in sufficient detail to make valid comparisons between studies or to enable a study to be reproduced. Although the majority of captured studies were determined to be of reasonable methodological quality, the studies were observational and usually included few surgeons. These study designs are associated with significant drawbacks, particularly with respect to confounding and selection bias67, 68, although this is expected given that such designs are more suited to measuring learning curves.
Regarding sources of bias in the included studies, the risk of bias was unclear for a large proportion of the quality assessment items, particularly in relation to blinding, external validity and statistical power, suggesting poor reporting of methodology.

There was little consistency in the performance thresholds used to measure the learning curve, making between‐study comparisons challenging. A large proportion of studies measured the number of procedures required to reach a plateau in surgeon performance. Although a variety of methods can define quantitatively the point at which a plateau is reached69, 70, 71, 72, there is currently no widely accepted and validated method, and some studies used visual fit alone1. The numbers of procedures reported to overcome learning curves are therefore subject to considerable uncertainty. Several studies measured the number of procedures to achieve a threshold set by experts; these thresholds were sometimes based on the performance of expert robotic surgeons50, whereas others38, 56 included expert laparoscopic surgeons. Many studies did not report the specific performance thresholds used to measure the learning curve, precluding any ability to make comparisons. These methods were frequently used to define the points at which surgeons reached ‘proficiency’, ‘competency’ or other related terms. These terms, however, were used inconsistently or interchangeably36, 50, or used with distinct definitions33, 49. In one study49, competency was used to describe performance that reached a steady state or plateau, whereas proficiency described further improvement after the plateau, and mastery the achievement of outcomes better than the set target value.
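As an example of the kind of statistical validation the review finds lacking, a classical power‐law learning model can be fitted and its learning rate estimated, rather than judging a plateau by eye. The sketch below is my own illustration on synthetic data, not a method used by any included study; it fits operating time as a·n^(−b) by log–log least squares:

```python
import numpy as np

def fit_power_law(cases, times):
    """Fit operating time ~ a * case_number**(-b) by log-log least squares.

    The estimated learning rate b quantifies improvement per doubling of
    experience and can be reported with uncertainty, unlike a visually
    judged plateau.
    """
    log_n = np.log(np.asarray(cases, dtype=float))
    log_t = np.log(np.asarray(times, dtype=float))
    slope, intercept = np.polyfit(log_n, log_t, 1)
    return np.exp(intercept), -slope  # a (initial level), b (learning rate)

# Synthetic data generated exactly from a power law, for illustration only
cases = np.arange(1, 11)
times = 300 * cases ** -0.2
a, b = fit_power_law(cases, times)
```

On real series the residuals, and a confidence interval for b, would indicate whether an apparent plateau reflects genuine stabilization or noise.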
These definitions are not well aligned with guidelines for assessing surgical competence73, 74 recommended by the US Accreditation Council for Graduate Medical Education and the American Board of Medical Specialties, nor with the criteria developed for procedure‐based assessment in the Intercollegiate Surgical Curriculum Programme in the UK.

The mismatch between the performance thresholds used to measure the learning curve and the terminology used to describe the results of the analyses can lead to misinterpretation. The term ‘overcome’ was commonly used to describe a point when surgeons reached a given performance threshold, implying a high level of performance. These thresholds, particularly time to plateau, are often simplistic and may not capture sufficient evidence about the learning process to support this implication. A plateau in performance does not always equate with high‐quality performance, as surgeons will not necessarily plateau at the same level1. Likewise, using thresholds of performance to define terms such as competence and proficiency could be considered inappropriate, as these terms also imply a specific level of performance. A recent study61 investigated proficiency‐based progression training programmes in which residents who failed to show progressive improvement (reached a plateau in performance) were not considered proficient unless they had achieved predetermined proficiency benchmarks set by experienced surgeons. The lack of consistency in the methods used to describe surgical performance, and the use of simplistic and inappropriate methods, add to the complexity of interpreting learning curves. Using thresholds that provide meaningful measures of surgeon performance, alongside standardized terminology, seems vital to realize the full potential of learning curve analyses for the optimization of surgical training programmes.
Time‐based variables were the most common metrics used to assess learning curves, as is the case for systematic reviews of surgical learning curves in other contexts63, 75. Although common across learning curve analyses, the present review suggests that variation can exist between the learning curve profiles of different metrics for a given procedure, with recovery and safety metrics (LOS, complications) exhibiting substantially longer learning curves than those for operating time, often with continued improvement for extended periods after the learning curve for operating time has been overcome. Given that improvements in clinical outcomes may be important drivers for the uptake of robot‐assisted approaches, the value of comparisons based on learning curves for operating time alone is unclear. In addition, the metrics captured may not directly measure surgical performance, with surrogate markers, such as operating time and LOS, reported frequently. Real‐time automated performance metrics, coupled with machine learning algorithms to process automatically collected data, may enable more direct measurement of surgeon performance76.

Several data gaps were identified. Many of the identified studies enrolled fully trained surgeons, investigating the transferability of skills for conversion from laparoscopic or open surgery to robot‐assisted procedures. These studies may be of limited value for informing the design of surgical training programmes. The limited data investigating the learning curves of trainee surgeons may result in missed opportunities for the optimization of programmes to accelerate the training of surgeons who are novices with robot‐assisted devices. No study reported data related to the economic impact of the learning curve, such as training costs or the financial impact of suboptimal outcomes.
This systematic review was a broad, exploratory search of the literature reporting on the surgeon learning curve for robot‐assisted surgery, with broad search terms and eligibility criteria. Incomplete and variable reporting created challenges for data synthesis, especially given some of the exploratory and subjective outcomes this review intended to identify. The exploratory nature of the review may have introduced a number of limitations, which may have resulted in relevant data being overlooked. For example, to include evidence of suitable quality, studies that involved fewer than 20 surgical procedures in total (across surgical approaches or surgeons) were excluded, regardless of the number of robot‐assisted procedures completed by any one surgeon or included within the learning curve analysis. Studies were captured only if they reported actual learning curve data (as a graph or table presenting at least four time points), so studies reporting potentially relevant data (for example the economic impact of the learning curve) could have been excluded if they did not meet these criteria. Studies that included just a single surgeon were also excluded, as the innate differences in technical ability that may exist between surgeons were anticipated to limit the reliability of the data reported in these studies. Only the learning curves of robotic procedures were considered in this review; although the review did not set out to compare robot‐assisted procedures with other surgical approaches, this prevented any conclusions from being drawn regarding the lengths of the learning curve for robot‐assisted versus non‐robotic procedures. Only studies originally published in English were included, and database searches were limited to 2012–2018 in order to capture evidence most relevant to present‐day training practices.
Although comparisons between robot‐assisted and other surgical approaches are warranted, studies with appropriate evaluation methods, standardized terminology and the necessary context are essential for robust comparisons to be made. Such studies should provide better estimates of learning curves for robot‐assisted procedures, enhance surgical training programmes and improve patient outcomes.

Supporting information:
Table S1. Search terms for MEDLINE, MEDLINE In‐Process, MEDLINE Epub Ahead of Print and Embase
Table S2. Search terms for Cochrane Library databases (searched via Wiley Online platform)
Table S3. Full eligibility criteria
Table S4. List of studies excluded at full‐text review and reasons for exclusion
Table S5. Quality assessment of non‐randomized articles using Downs and Black checklist