| Literature DB >> 33759794 |
Morgane Casanova1, Anne Clavreul2,3, Gwénaëlle Soulard2,3, Matthieu Delion2, Ghislaine Aubin2, Aram Ter Minassian4, Renaud Seguier1, Philippe Menei2,3.
Abstract
BACKGROUND: Language mapping during awake brain surgery is currently a standard procedure. However, mapping is rarely performed for other cognitive functions that are important for social interaction, such as visuospatial cognition and nonverbal language, including facial expressions and eye gaze. The main reason for this omission is the lack of tasks that are fully compatible with the restrictive environment of an operating room and awake brain surgery procedures.
Keywords: awake surgery; brain mapping; eye tracking; mobile phone; nonverbal language; virtual reality; visuospatial cognition
Year: 2021 PMID: 33759794 PMCID: PMC8074984 DOI: 10.2196/24373
Source DB: PubMed Journal: J Med Internet Res ISSN: 1438-8871 Impact factor: 5.428
Figure 1. (A) Patient wearing the virtual reality headset. (B) and (C) Example of the item “phone” in the DO 80 naming task presented in 2D (B) and 3D (C) with the virtual reality headset. The green spot indicates the patient’s gaze.
Figure 2. Left: view of the operating room during the procedure. (A) Head of the patient wearing the virtual reality headset; (B) application of direct electrical stimulation to the exposed brain; (C) screen showing what the patient sees in the virtual reality headset, with his gaze materialized by a green spot; (D) neuronavigational system showing brain white matter fascicles and the position of the electrode. Right: example of a layout after the virtual reality task simulating a visuospatial and social experience. (E) The image that is visualized and analyzed on the screen (C). The movement of the patient’s gaze is visualized as a blue line (with the starting point in green and the endpoint in pink). The green box indicates the avatar making eye contact. The white arrow indicates the avatar on which the patient focused for more than 0.6 seconds (triggering the expression of a dynamic facial emotion). In this example, the patient identified the avatar making eye contact in 2.53 seconds and indicated the emotion expressed 3.77 seconds later.
Table 1. Baseline characteristics of the 15 patients and the virtual reality tasks they performed.
| Patient | Sex | Age (years) | Handedness | Diagnosis | Hemisphere | Lobe | Preoperative training | Brain mapping |
| 1 | Male | 68 | Left | Metastasis | Left | Parietal | Task 1a and task 2b | Task 1 and task 2 |
| 2 | Male | 41 | Right | Oligodendroglioma II | Right | Frontal | Task 1 and task 2 | Motor and task 2 |
| 3 | Female | 25 | Right | Astrocytoma III | Left | Frontal | Task 1 | Motor and task 1 |
| 4 | Female | 66 | Right | Oligodendroglioma III | Right | Frontal | Task 2 | Motor |
| 5 | Male | 39 | Left | Astrocytoma III | Right | Frontal | Task 1 and task 2 | Motor and task 1 and task 2 |
| 6 | Female | 60 | Right | Glioblastoma | Left | Temporoparietal | Task 1 | Task 1 |
| 7 | Male | 48 | Right | Oligodendroglioma III | Left | Frontal | Task 1 and task 2 | Task 1 and task 2 |
| 8 | Female | 53 | Right | Glioblastoma | Left | Parietal | Task 1 | Task 1 |
| 9 | Male | 68 | Right | Glioblastoma | Left | Frontal | Task 1 and task 2 | Task 1 |
| 10 | Male | 73 | Right | Metastasis | Left | Frontal | Task 1 and task 2 | Task 1 and task 2 |
| 11 | Female | 47 | Right | Astrocytoma III | Left | Parietal | Task 1 and task 2 | Motor and task 1 |
| 12 | Male | 61 | Right | Metastasis | Left | Temporoparietal | Task 1 and task 2 | Task 1 |
| 13 | Male | 43 | Right | Astrocytoma III | Left | Parietal | Task 1 and task 2 | Task 1 and task 2 |
| 14 | Female | 53 | Right | Astrocytoma III | Left | Frontotemporal insular | Task 1 and task 2 | Task 1 |
| 15 | Female | 41 | Right | Astrocytoma III | Right | Frontal | Task 2 | Task 2 |
aTask 1: DO 80 (tablet, 2D virtual reality, and 3D virtual reality).
bTask 2: visuospatial and social virtual reality experience.