Juan Antonio Barragan, Jing Yang, Denny Yu, Juan P. Wachs.
Abstract
Adoption of robotic-assisted surgery has steadily increased, as it improves the surgeon's dexterity and visualization. Despite these advantages, the success of a robotic procedure is highly dependent on the availability of a proficient surgical assistant who can collaborate with the surgeon. With the introduction of novel medical devices, the surgeon has taken over some of the surgical assistant's tasks to increase their independence. This, however, has also resulted in surgeons experiencing higher levels of cognitive demand, which can lead to reduced performance. In this work, we proposed a neurotechnology-based semi-autonomous assistant to relieve the main surgeon of the additional cognitive demands of a critical support task: blood suction. To create a more synergistic collaboration between the surgeon and the robotic assistant, a real-time cognitive workload assessment system based on EEG signals and eye tracking was introduced. A computational experiment demonstrates that cognitive workload can be effectively detected with 80% accuracy. We then show how surgical performance can be improved by using the neurotechnological autonomous assistant in a closed feedback loop to prevent states of high cognitive demand. Our findings highlight the potential of utilizing real-time cognitive workload assessments to improve the collaboration between an autonomous algorithm and the surgeon.
Year: 2022 PMID: 35296714 PMCID: PMC8927583 DOI: 10.1038/s41598-022-08063-w
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.379
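The record describes a real-time workload detector that classifies windows of EEG and eye-tracking data. As a minimal sketch (not the authors' implementation), the following shows how raw EEG might be cut into overlapping windows and reduced to theta/alpha band-power features for a downstream classifier such as the LSTM of Figure 1; the sampling rate, window length, and band edges are illustrative assumptions.

```python
import numpy as np

def band_power(window, fs, lo, hi):
    """Mean spectral power of one EEG window in [lo, hi] Hz (via FFT)."""
    freqs = np.fft.rfftfreq(window.shape[-1], d=1.0 / fs)
    psd = np.abs(np.fft.rfft(window, axis=-1)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[..., mask].mean(axis=-1)

def windowed_features(eeg, fs=250, win_s=2.0, step_s=0.5):
    """Slide a window over (channels, samples) EEG and return
    theta (4-8 Hz) and alpha (8-13 Hz) power per channel per window."""
    win, step = int(win_s * fs), int(step_s * fs)
    feats = []
    for start in range(0, eeg.shape[1] - win + 1, step):
        w = eeg[:, start:start + win]
        feats.append(np.concatenate([band_power(w, fs, 4, 8),
                                     band_power(w, fs, 8, 13)]))
    return np.array(feats)

# Example: 8 channels, 10 s of synthetic EEG at 250 Hz
rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 2500))
X = windowed_features(eeg)
print(X.shape)  # one row per window, 8 theta + 8 alpha features per row
```

The 2 s window with 0.5 s step is a common trade-off between spectral resolution and detection latency; the paper's actual sequence lengths are swept in Figure 1.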
Figure 1. (a) Correlation between accuracy and sequence length in dataset 1. The best-performing model was our proposed recurrent architecture based on LSTM cells. (b) Correlation between accuracy and sequence length in dataset 2. The best-performing model was a feed-forward neural network.
Accuracy analysis for multiple models and sequence lengths. (a) Accuracy analysis for dataset 1. (b) Accuracy analysis for dataset 2.
Highest accuracy values are given in bold.
Figure 2. Mean scalp topography plot for the high-cognitive-load condition across all users, with baseline subtraction. The baseline for each channel was calculated from the low-cognitive-load data. Red areas represent increased oscillatory activity under high cognitive workload, while blue areas represent inhibition of spectral activity.
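The baseline subtraction described in the Figure 2 caption amounts to subtracting each channel's mean low-load spectral power from its mean high-load power. A minimal sketch with made-up values (not the study's data):

```python
import numpy as np

def baseline_subtracted_power(high_load_power, low_load_power):
    """Per-channel change in spectral power relative to the low-load baseline.

    Inputs are (trials, channels) arrays of band power. Positive values
    (red in the topography) indicate increased oscillatory activity under
    high cognitive workload; negative values (blue) indicate inhibition.
    """
    baseline = low_load_power.mean(axis=0)   # mean over low-load trials
    return high_load_power.mean(axis=0) - baseline

# Illustrative power values for 4 channels over 2 trials per condition
low = np.array([[1.0, 2.0, 1.5, 0.8],
                [1.2, 1.8, 1.5, 1.0]])
high = np.array([[1.6, 1.7, 2.0, 0.9],
                 [1.4, 1.5, 2.2, 0.9]])
print(baseline_subtracted_power(high, low))
```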
Objective performance metrics results. Statistically significant results are highlighted in bold.

| Type | Name | Autonomy, mean (std), N=10 | Manual, mean (std), N=10 | T-statistic | p-value |
|---|---|---|---|---|---|
| Time | Clutching time (s) | 2.96 (6.51) | 14.84 (11.33) | −3.211 | |
| Time | Completion time (s) | 334.11 (141.61) | 486.19 (210.36) | −4.480 | |
| Collaboration | Concurrent activity (%) | 0.23 (0.07) | 0 (0) | 9.907 | |
| Collaboration | Psm1 idle time (%) | 0.21 (0.08) | 0.48 (0.09) | −9.139 | |
| Collaboration | Psm2 idle time (%) | 0.37 (0.15) | 0.55 (0.13) | −9.014 | |
| Collaboration | Psm3 idle time (%) | 0.7 (0.08) | 0.82 (0.05) | −3.047 | 0.014 |
| Motion | Psm1 velocity (cm/s) | 0.98 (0.24) | 0.72 (0.2) | 4.776 | |
| Motion | Psm2 velocity (cm/s) | 0.73 (0.17) | 0.56 (0.19) | 6.853 | |
| Motion | Psm3 velocity (cm/s) | 0.65 (0.17) | 0.37 (0.09) | 3.977 | |
| Events | Tool changing events | 0.8 (1.23) | 11.2 (8.57) | −4.221 | |
| Events | Clutching events | 0 (0) | 3.7 (2.75) | −4.254 | |
| Blood | Percentage blood (%) | 0.14 (0.04) | 0.13 (0.05) | 0.593 | 0.568 |
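The t-statistics above come from paired-samples t-tests comparing the same N=10 participants under the autonomy and manual conditions. A minimal sketch of that computation, with hypothetical per-participant completion times rather than the study's raw data:

```python
import math
from statistics import mean, stdev

def paired_t(a, b):
    """Paired-samples t-statistic for two conditions measured on the
    same participants: t = mean(d) / (stdev(d) / sqrt(n)), df = n - 1."""
    d = [x - y for x, y in zip(a, b)]
    return mean(d) / (stdev(d) / math.sqrt(len(d)))

# Hypothetical completion times (s) for 5 participants in both conditions
autonomy = [310.0, 420.5, 298.2, 355.0, 380.3]
manual = [480.1, 515.7, 430.9, 502.4, 470.0]
t = paired_t(autonomy, manual)
print(round(t, 3))
```

A negative t-statistic, as in most rows of the table, simply means the autonomy condition's mean was lower than the manual condition's.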
NASA-TLX results and measured cognitive workload. Statistically significant results are highlighted in bold. The NASA-TLX is a 10-point Likert-scale questionnaire that divides workload demands into 6 components: effort, frustration, mental demand, performance, physical demand, and temporal demand.

| Type | Component | Autonomy, mean (std), N=10 | Manual, mean (std), N=10 | T-statistic | p-value |
|---|---|---|---|---|---|
| NASA-TLX | Effort | 4.25 (2.81) | 6.05 (2.44) | −5.125 | |
| NASA-TLX | Frustration | 3.40 (2.09) | 5.95 (2.09) | −5.517 | |
| NASA-TLX | Mental demand | 4.30 (2.75) | 6.45 (2.20) | −5.018 | |
| NASA-TLX | Performance | 2.65 (2.27) | 3.90 (2.61) | −3.926 | |
| NASA-TLX | Physical demand | 4.55 (2.66) | 6.15 (3.10) | −3.320 | |
| NASA-TLX | Temporal demand | 3.85 (2.33) | 6.60 (1.96) | −6.942 | |
| NASA-TLX | Workload score | 23.00 (12.41) | 35.10 (11.56) | −6.181 | |
| Workload prediction | Cognitive index | 0.527 (0.254) | 0.585 (0.230) | −1.666 | 0.14 |
Figure 3. Diagram showing how the EEG and eye-tracker signals are synchronized for the cognitive workload detection system.
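EEG amplifiers and eye trackers typically run at different sampling rates, so fusing them requires resampling one stream onto the other's time base. A minimal sketch of such an alignment by linear interpolation (the rates, channel counts, and signal names are illustrative assumptions, not the authors' pipeline):

```python
import numpy as np

def synchronize(eeg_t, eeg_vals, eye_t, eye_vals):
    """Resample the (slower) eye-tracker stream onto the EEG timestamps by
    linear interpolation, so both modalities share one time base.

    eeg_t, eye_t : 1-D arrays of timestamps in seconds (monotonic)
    eeg_vals     : (channels, len(eeg_t)) EEG samples
    eye_vals     : 1-D eye-tracker signal (e.g. pupil diameter, mm)
    """
    eye_on_eeg = np.interp(eeg_t, eye_t, eye_vals)
    return np.vstack([eeg_vals, eye_on_eeg])  # fused (channels + 1, T) matrix

# Example: 250 Hz EEG, 60 Hz eye tracker, over 1 s
eeg_t = np.arange(0, 1, 1 / 250)
eye_t = np.arange(0, 1, 1 / 60)
eeg = np.zeros((8, eeg_t.size))
pupil = np.linspace(3.0, 3.5, eye_t.size)  # slowly dilating pupil
fused = synchronize(eeg_t, eeg, eye_t, pupil)
print(fused.shape)  # 8 EEG channels + 1 interpolated eye channel
```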
Figure 4. Proposed fully convolutional network with a VGG-16 backbone. The architecture uses the following color coding: (1) green blocks represent convolutional layers, (2) orange blocks represent max-pooling layers, (3) blue blocks represent upsampling layers, and (4) purple blocks represent softmax layers.
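To illustrate the spatial bookkeeping behind Figure 4 (only the shapes, not the network itself): VGG-16's five 2×2 max-pooling stages shrink a feature map by a factor of 32, which is why the upsampling layers are needed before the softmax can emit a per-pixel prediction at the input resolution. A toy numpy sketch, with a 224×224 input size assumed:

```python
import numpy as np

def pool2x(x):
    """2x2 max-pooling: halves each spatial dimension (assumes even sizes)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def upsample(x, factor):
    """Nearest-neighbour upsampling by an integer factor."""
    return np.repeat(np.repeat(x, factor, axis=0), factor, axis=1)

x = np.random.default_rng(0).random((224, 224))  # one feature map
for _ in range(5):          # VGG-16 applies five 2x2 max-pools
    x = pool2x(x)
print(x.shape)              # spatial size divided by 2**5 = 32
y = upsample(x, 32)         # upsampling layers restore the input resolution
print(y.shape)
```

Real FCNs use learned transposed convolutions rather than nearest-neighbour repetition, but the resolution arithmetic is the same.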