Guillaume Kugener, Dhiraj J Pangal, Tyler Cardinal, Casey Collet, Elizabeth Lechtholz-Zey, Sasha Lasky, Shivani Sundaram, Nicholas Markarian, Yichao Zhu, Arman Roshannai, Aditya Sinha, X Y Han, Vardan Papyan, Andrew Hung, Animashree Anandkumar, Bozena Wrobel, Gabriel Zada, Daniel A Donoho.
Abstract
Importance: Surgical data scientists lack video data sets that depict adverse events, which may affect model generalizability and introduce bias. Hemorrhage may be particularly challenging for computer vision-based models because blood obscures the scene.
Objective: To assess the utility of the Simulated Outcomes Following Carotid Artery Laceration (SOCAL) data set, a publicly available surgical video data set of hemorrhage complication management with instrument annotations and task outcomes, to provide benchmarks for surgical data science techniques, including computer vision instrument detection, instrument use metrics and outcome associations, and validation of a SOCAL-trained neural network using real operative video.
Design, Setting, and Participants: For this quality improvement study, a total of 75 surgeons with 1 to 30 years' experience (mean, 7 years) were filmed from January 1, 2017, to December 31, 2020, managing catastrophic surgical hemorrhage in a high-fidelity cadaveric training exercise at nationwide training courses. Videos were annotated from January 1 to June 30, 2021.
Interventions: Surgeons received expert coaching between 2 trials.
Main Outcomes and Measures: Hemostasis within 5 minutes (task success, dichotomous), time to hemostasis (in seconds), and blood loss (in milliliters) were recorded. Deep neural networks (DNNs) were trained to detect surgical instruments in view. Model performance was measured using mean average precision (mAP), sensitivity, and positive predictive value.
Year: 2022 PMID: 35311962 PMCID: PMC8938712 DOI: 10.1001/jamanetworkopen.2022.3177
Source DB: PubMed Journal: JAMA Netw Open ISSN: 2574-3805
Figure 1. Endoscopic Images of Internal Carotid Artery (ICA) Injury Management
An actual ICA and steps to achieve hemostasis in the cadaver model. Surgical instruments were hand annotated in each video frame with bounding boxes using the annotation tool VoTT. White arrows indicate injury; blue arrows, instruments.
Simulated Outcomes Following Carotid Artery Laceration Video Data Set Instrument Instances, Training, Validation, and Test Sets
| Variable | All videos | Training | Validation | Test |
|---|---|---|---|---|
| No. of trials | 147 | 124 | 9 | 14 |
| No. of frames | 31 443 | 27 223 | 2292 | 1928 |
| No. of instruments | ||||
| Suction | 22 356 | 19 106 | 1862 | 1388 |
| Grasper | 15 943 | 13 576 | 1084 | 1283 |
| String | 11 917 | 9944 | 1214 | 759 |
| Cottonoid | 10 005 | 8491 | 610 | 904 |
| Muscle | 4560 | 3681 | 223 | 656 |
| Tool | 76 | 76 | 0 | 0 |
| Drill | 210 | 159 | 0 | 51 |
| Scalpel | 4 | 4 | 0 | 0 |
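The split in the table above can be represented and sanity-checked programmatically. A minimal sketch in Python, with all counts copied from the table (the dictionary layout is an illustrative assumption, not the data set's actual distribution format):

```python
# Trial and frame counts from the SOCAL split table above.
SPLITS = {
    "training":   {"trials": 124, "frames": 27223},
    "validation": {"trials": 9,   "frames": 2292},
    "test":       {"trials": 14,  "frames": 1928},
}
ALL_VIDEOS = {"trials": 147, "frames": 31443}  # "All videos" column

def split_totals(splits):
    """Sum trials and frames across splits to check them against the totals column."""
    return {
        key: sum(split[key] for split in splits.values())
        for key in ("trials", "frames")
    }

totals = split_totals(SPLITS)
assert totals == ALL_VIDEOS  # 124 + 9 + 14 = 147 trials; 27223 + 2292 + 1928 = 31443 frames
```

The per-split sums match the "All videos" column exactly, confirming the table is internally consistent.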
Figure 2. Deep Learning Instrument Detection Model Performance
Orange lines represent the YOLOv3 deep neural network; blue lines represent the RetinaNet deep neural network. mAP indicates mean average precision.
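The mAP metric used to compare the YOLOv3 and RetinaNet detectors averages the per-class average precision (AP) over instrument classes. A minimal sketch of all-point-interpolated AP, assuming detections have already been matched to ground-truth boxes at some IoU threshold (function names and signatures here are illustrative, not from the paper's code):

```python
def average_precision(matches, n_ground_truth):
    """All-point-interpolated average precision for one class.

    matches: booleans for detections sorted by descending confidence;
             True means the detection matched a ground-truth box.
    n_ground_truth: total ground-truth instances of this class.
    """
    tp = fp = 0
    precisions, recalls = [], []
    for is_match in matches:
        if is_match:
            tp += 1
        else:
            fp += 1
        precisions.append(tp / (tp + fp))
        recalls.append(tp / n_ground_truth)
    # Integrate precision over recall, using the max precision at or beyond each recall point.
    ap, prev_recall = 0.0, 0.0
    for i, recall in enumerate(recalls):
        ap += (recall - prev_recall) * max(precisions[i:])
        prev_recall = recall
    return ap

def mean_average_precision(per_class_aps):
    """mAP is the unweighted mean of per-class APs."""
    return sum(per_class_aps) / len(per_class_aps)
```

For example, two correct detections against two ground-truth boxes give an AP of 1.0, while a missed or spurious detection pulls the precision-recall curve, and hence the AP, downward.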
Preliminary Instrument Use Patterns and Corresponding Variance of Blood Loss
| Instrument use pattern feature | Effect size | Variance of blood loss | P value |
|---|---|---|---|
| Time to hemostasis | 2.9 | 49.84 | .001 |
| Frames with | |||
| Grasper | –679.5 | 6.24 | .001 |
| Cottonoid | –684.6 | 6.18 | .001 |
| Muscle | –466.2 | 1.99 | .02 |
| Suction | –403.5 | 1.71 | .03 |
| String | –259.3 | 1.37 | .05 |
Figure 3. Instrument Detection Results
A total of 138 instruments were identified (true-positive results), 6 noninstruments were identified (false-positive results), and 41 instruments were missed (false-negative results).
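From the counts in this caption, the detector's sensitivity and positive predictive value can be reproduced directly (a quick arithmetic check, not the authors' code):

```python
# Counts from the figure caption: true positives, false positives, false negatives.
TP, FP, FN = 138, 6, 41

sensitivity = TP / (TP + FN)  # fraction of real instruments that were detected: 138/179
ppv = TP / (TP + FP)          # fraction of detections that were real instruments: 138/144

print(f"sensitivity = {sensitivity:.3f}")  # 0.771
print(f"PPV = {ppv:.3f}")                  # 0.958
```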