| Literature DB >> 34381099 |
Kun Qian, Tomoki Arichi, Anthony Price, Sofia Dall'Orso, Jonathan Eden, Yohan Noh, Kawal Rhode, Etienne Burdet, Mark Neil, A David Edwards, Joseph V Hajnal.
Abstract
Patients undergoing Magnetic Resonance Imaging (MRI) often experience anxiety, and sometimes distress, prior to and during scanning. Here a fully MRI-compatible virtual reality (VR) system is described and tested, with the aim of creating a radically different experience. Potential benefits could accrue from the strong sense of immersion that VR can create, which could be used to design sensory experiences that avoid the perception of being enclosed, and to provide new modes of diversion and interaction that make even lengthy MRI examinations much less challenging. Most current VR systems rely on head-mounted displays combined with head motion tracking to achieve and maintain a visceral sense of a tangible virtual world, but this approach encourages physical motion, which is unacceptable during scanning and physically incompatible with the MRI environment. The proposed VR system instead uses gaze tracking to control and interact with the virtual world. MRI-compatible cameras allow real-time eye tracking, and robust gaze tracking is achieved through an adaptive calibration strategy in which each successive VR interaction initiated by the subject updates the gaze estimation model. A dedicated VR framework has been developed, including a rich virtual world and gaze-controlled game content. To aid in achieving immersive experiences, physical sensations, including noise, vibration and the proprioception associated with patient table movements, have been made congruent with the presented virtual scene. A live video link allows subject-carer interaction, projecting a supportive presence into the virtual world.
Year: 2021 PMID: 34381099 PMCID: PMC8357830 DOI: 10.1038/s41598-021-95634-y
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.379
Figure 1. System schematic overview. Illustration of the hardware setup in the scanner and console rooms. (a) The console room setup includes the workstation running the VR system, the eye and table tracking modules, and the carer communication system. (b) The in-scanner hardware mainly consists of an MR-compatible projector, a custom-designed VR headset and a patient table tracking camera. (c) Views of the system in the scanner and the headset design.
Figure 3. Illustration of the system pipeline: the eye tracking stage tracks the eye corner and pupil location for each eye. The gaze estimation module uses the eye tracking results to build an initial regression model, which is then updated by interactive calibration. The gaze information is fed into the VR system to trigger interactions. Interactive progressive calibration is executed in the background whenever interactions are triggered; this mechanism progressively improves the accuracy of the gaze regression model without changing the user experience.
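The progressive calibration loop in Figure 3 amounts to refitting a regression from eye features to screen coordinates each time a gaze-triggered interaction supplies a new labelled sample. The following is a minimal sketch of that loop, assuming a least-squares polynomial model; the feature map mirrors the best-predicting model reported in Fig. 8 (quadratic in pupil coordinates, linear in eye corner coordinates), and all names are illustrative rather than the authors' code.

```python
import numpy as np

def eye_features(pupil, corner):
    """Feature vector: quadratic in pupil coordinates, linear in eye corner
    coordinates (cf. the best-predicting model in Fig. 8)."""
    px, py = pupil
    cx, cy = corner
    return np.array([1.0, px, py, px * py, px**2, py**2, cx, cy])

class ProgressiveGazeModel:
    """Refits a least-squares mapping (eye features -> screen x, y) every
    time an interaction yields a new calibration sample."""

    def __init__(self):
        self.X, self.Y = [], []   # accumulated calibration samples
        self.W = None             # weight matrix, shape (n_features, 2)

    def add_sample(self, pupil, corner, target_xy):
        """Called when a gaze interaction fires on a target with a known
        screen position: append the sample and refit the model."""
        self.X.append(eye_features(pupil, corner))
        self.Y.append(target_xy)
        A = np.asarray(self.X, dtype=float)
        B = np.asarray(self.Y, dtype=float)
        self.W, *_ = np.linalg.lstsq(A, B, rcond=None)

    def predict(self, pupil, corner):
        """Estimated gaze point on the display, in screen pixels."""
        return eye_features(pupil, corner) @ self.W
```

Because every triggered interaction supplies a fresh sample, the fit keeps adapting to gradual head drift without any explicit recalibration step visible to the subject.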
Figure 7. Effect of head movement on detected pupil locations when fixating on the unchanging grid of 15 points shown in Fig. 6a: (a) Screenshots showing eye positions in each of 8 head poses. (b) The pupil positions (solid circles) and eye corner locations (solid diamonds) for each of the poses in (a). Each color group (15 pupil points and 1 median corner point) shows the measured pixel values for the correspondingly numbered pose. The input eye image is 640 × 480 pixels. Calibration is based on the first pose.
Figure 2. The mechanism of the gaze-based interface and illustration of interactive objects in the VR system. The interface converts the estimated gaze coordinates on the display screen into a 3D ray to perform target collision detection. Interaction in the virtual world is based on dwell time and target components.
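The interaction model of Figure 2 reduces to: unproject the estimated gaze point into a ray, test it against scene targets, and trigger a target only after a sustained fixation. A minimal sketch of the dwell-time logic follows, assuming the engine supplies the ray-cast result each frame; the threshold and all names are illustrative, not details from the paper.

```python
import time

DWELL_TIME = 1.0  # seconds of continuous fixation required to trigger (illustrative)

class DwellSelector:
    def __init__(self):
        self.current = None   # target currently under the gaze ray
        self.start = 0.0      # when the gaze first landed on it

    def update(self, hit_target):
        """hit_target: object hit by ray-casting the gaze through the scene
        (None if the ray hits nothing). Returns a target to activate, or None."""
        now = time.monotonic()
        if hit_target is not self.current:
            # Gaze moved to a different target (or off all targets): restart timer.
            self.current, self.start = hit_target, now
            return None
        if self.current is not None and now - self.start >= DWELL_TIME:
            # Sustained fixation: consume the event so it fires only once.
            self.current = None
            return hit_target
        return None
```

Resetting the timer whenever the hit target changes means brief saccades across an object do not trigger it, which is the usual rationale for dwell-based selection.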
Figure 4. Illustration of patient table tracking: (a) the original frame and (b) the masked grayscale frame. (c,d) Table motion vectors derived from optical flow at sampled pixels in the masked region as the patient table moves.
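One plausible realization of the table tracking in Figure 4 is sparse Lucas-Kanade optical flow restricted to the masked table region, with a median over the point-wise vectors for robustness. This is a hedged sketch using standard OpenCV calls, not the authors' implementation; the parameter values are assumptions.

```python
import cv2
import numpy as np

def table_motion(prev_gray, gray, mask):
    """Median optical-flow vector over sampled points inside the table mask.
    prev_gray, gray: consecutive grayscale frames; mask: uint8 region mask."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                  qualityLevel=0.01, minDistance=7, mask=mask)
    if pts is None:
        return np.zeros(2)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good = status.ravel() == 1
    if not good.any():
        return np.zeros(2)
    flow = (nxt[good] - pts[good]).reshape(-1, 2)
    # Median is robust to the occasional mistracked point.
    return np.median(flow, axis=0)   # table displacement in pixels per frame
```

Integrating these per-frame vectors over time gives the table position signal needed to keep the virtual frame of reference congruent with the felt motion.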
Figure 5. System timeline and content overview: (a) Opening scene, with a frame of reference that moves to match what the subject perceives as the patient table moves into the scanner. (b) Once the subject is inside the bore, eye tracking calibration begins. (c) A familiarization scene follows, enhancing gaze tracking through progressive calibration. (d) The subject is guided into a VR lobby to select movies (e) and games (f) using gaze-based selection.
Figure 6. Illustration of the viewing screen shape, projection region and calibration point patterns used for the experiments. The green circles are targets used for calibration. (a) Calibration pattern for the regression model comparison. (b) Pattern used for initial system calibration, together with the locations of the targets used for accuracy analysis (one target shown at the centre of each rectangle within the display region).
Figure 8. Prediction and regression errors for different models, based on measurements from 7 different poses. In each graph the horizontal axis represents the sequentially numbered target fixations and the vertical axis shows the fractional regression and prediction errors as percentages of the full screen width. The mean regression and prediction errors over the full trial for each model are shown under each graph. The smallest mean regression error is obtained with the largest model, but the smallest prediction error is obtained with the model that is quadratic in the pupil coordinates and linear in the eye corner coordinates.
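The distinction Figure 8 draws between regression error (residual after the current fixation is included in the fit) and prediction error (error on a fixation before the model has seen it) can be evaluated sequentially. The sketch below is an illustrative reconstruction under that interpretation, not the authors' code; two of the candidate feature maps are shown.

```python
import numpy as np

def phi_linear(p, c):
    """Linear in both pupil and eye corner coordinates."""
    return np.array([1, p[0], p[1], c[0], c[1]])

def phi_quad_pupil(p, c):
    """Quadratic in pupil coordinates, linear in corner coordinates
    (the best-predicting model in Fig. 8)."""
    return np.array([1, p[0], p[1], p[0]*p[1], p[0]**2, p[1]**2, c[0], c[1]])

def sequential_errors(samples, phi):
    """samples: list of (pupil, corner, target) per fixation, in order.
    Prediction error uses the model fit on earlier fixations only;
    regression error is the residual after refitting with the current one."""
    X, Y, pred, reg = [], [], [], []
    W = None
    for p, c, t in samples:
        x = phi(p, c)
        if W is not None:
            pred.append(np.linalg.norm(x @ W - np.asarray(t)))
        X.append(x)
        Y.append(t)
        W, *_ = np.linalg.lstsq(np.asarray(X, float), np.asarray(Y, float),
                                rcond=None)
        reg.append(np.linalg.norm(x @ W - np.asarray(t)))
    return np.array(pred), np.array(reg)
```

Under this scheme a larger model always lowers the regression error, while the prediction error exposes overfitting, which is consistent with the pattern reported in Figure 8.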
Figure 9. Gaze system performance: (a) Individual performance, showing the mean absolute error (MAE) of prediction and regression over all targets for each subject. (b) Cumulative distribution of prediction errors over all targets and subjects (progressive calibration in orange, fixed model in blue). (c) Cumulative distribution of head displacement since the previous fixation for all subjects.
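For completeness, the summary statistics plotted in Figure 9 reduce to a per-subject mean absolute error and empirical cumulative distributions; a minimal sketch:

```python
import numpy as np

def mae(errors):
    """Mean absolute error over all targets for one subject (Fig. 9a)."""
    return np.mean(np.abs(errors))

def empirical_cdf(errors):
    """Sorted values and cumulative fraction, as plotted in Fig. 9b,c."""
    x = np.sort(np.ravel(errors))
    y = np.arange(1, x.size + 1) / x.size
    return x, y
```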
Figure 10. Provocation test results: (a) Prediction errors for the fixed model (blue) and the progressive model (orange). (b) Absolute head displacement from the pose at the first fixation event. (c) Relative head displacement since the previous fixation. Note that a large sudden movement (a large vertical spike in (c)) produces a correspondingly larger prediction error in (a), but the system accuracy rapidly recovers.
Comparison with state-of-the-art methods.
| Category | Method | Setup | Accuracy (degrees or pixels) | Head motion range | Limitations | Advantages | Potential for MR-compatible VR |
|---|---|---|---|---|---|---|---|
| 2DRFB | Ours | 2 cameras with 1 LED each | 0.76° | 30 mm | 1. Gaze interaction is required to maintain accuracy when large head motion occurs. 2. Close-up eye image required | 1. Accurate for a relatively fixed head. 2. No need for corneal reflection (robust to head motion) | High |
| 2DRFB | Su et al. | 1 camera | 77.46 pixels | Unconstrained | 1. Poor accuracy under large head motion | 1. Fair accuracy for a fixed head | Low |
| 2DRFB | Arar et al. | 1–2 cameras + light array | 0.91° | Constrained | 1. Corneal reflection required. 2. Vulnerable to head motion | 1. Accurate for a fixed head | Low |
| 2DRFB | Chi et al. | 1 camera + light array | 20.00 pixels | Constrained | 1. Corneal reflection required. 2. Vulnerable to head motion | 1. Accurate for a fixed head | Low |
| AB | Wood et al. | 1 camera | 4.41° | Free | 1. Poor accuracy. 2. Relies on a dataset | 1. Calibration-free. 2. Low hardware requirements | Low |
| AB | Kim et al. | Near-eye camera + infrared light | 3.51° | Fixed | 1. Poor accuracy. 2. Relies on a dataset | 1. Calibration-free. 2. Low hardware requirements | Low |
| 3DMB | Sigut et al. | 1–4 cameras + light array | 0.60° | Constrained | 1. Complex setup. 2. Corneal reflection required | 1. Accurate, with head motion allowed | Low |
| CRB | Coutinho et al. | 1 camera + light array | 0.60° | Constrained | 1. Vulnerable to variation in user distance. 2. Corneal reflection required. 3. Complex setup | 1. Accurate, with head motion allowed | Low |
| MRCETS | MRC Eye Tracking Solution (MRC) | Coil mount + 2 LEDs | 0.40° | Not stated | 1. Corneal reflection required. 2. Close-up eye image required | 1. Accurate for a relatively fixed head | Low |
| MRCETS | LiveTrack AV (Cambridge Research Systems) | Coil mount + 2 infrared lights | 0.50° | 20 mm | 1. Corneal reflection required. 2. Close-up eye image required | 1. Accurate for a relatively fixed head | Low |
| MRCETS | VisualSystem HD (NordicNeuroLab) | Near-face coil mount + 2 infrared lights | 0.20° | Fixed | 1. Corneal reflection required. 2. Close-up eye image required | 1. Accurate for a relatively fixed head | Low |
| MRCETS | DeepVOG | Near-face coil mount + 2 infrared lights | 0.50° | Fixed | 1. Corneal reflection required. 2. Close-up eye image required | 1. Accurate for a relatively fixed head | Low |
We categorize these methods as: 2DRFB = 2D Regression/Feature Based; AB = Appearance Based; 3DMB = 3D Model Based; CRB = Cross Ratio Based; MRCETS = MR Compatible Eye Tracking Software.