Berk Turhan1,2, Zeynep H Gümüş2,3.
Abstract
How we interact with computer graphics has not changed significantly since their invention: we still view 2D text and images on flat screens. Yet, recent advances in computing technology, internetworked devices and gaming are driving the design and development of new ideas in other modes of human-computer interfaces (HCIs). Virtual Reality (VR) technology uses computers and HCIs to create the feeling of immersion in a three-dimensional (3D) environment that contains interactive objects with a sense of spatial presence, where objects have a spatial location relative to, and independent of, the users. While this virtual environment does not necessarily match the real world, by creating the illusion of reality, it helps users leverage the full range of human sensory capabilities. Similarly, Augmented Reality (AR) superimposes virtual images onto the real world. Because humans learn the physical world through gradual sensory familiarization, these immersive visualizations enable gaining familiarity with biological systems not realizable in the physical world (e.g., allosteric regulatory networks within a protein or biomolecular pathways inside a cell). As VR/AR interfaces are anticipated to grow explosively in consumer markets, systems biologists will become more immersed in their world. Here we introduce a brief history of VR/AR, their current roles in systems biology, and their advantages and disadvantages in augmenting user abilities. We next argue that in systems biology, VR/AR technologies will be most useful in visually exploring and communicating data; performing virtual experiments; and education/teaching. Finally, we discuss our perspective on future directions for VR/AR in systems biology.
Keywords: 3D; CAVE; augmented reality; immersive 3D; multi-omics visualization; systems biology; virtual reality; visualization design
Year: 2022 PMID: 35647580 PMCID: PMC9140045 DOI: 10.3389/fbinf.2022.873478
Source DB: PubMed Journal: Front Bioinform ISSN: 2673-7647
Input tracking and output display technologies in consumer-level VR/AR systems.

| Component | Function | Consumer examples |
|---|---|---|
| Input trackers | Track user position ((x, y, z) coordinates) and orientation (yaw, pitch, roll angles) as users move about (either all or some). "User position" may be body movements, head rotations, or gestures. Feed tracking information back to the computer for real-time display updates (e.g., 3D mouse (wand) or data gloves). Generally use 3D computer vision, sync pulses and laser lines, or inertial measurement units to recognize movements, gestures and positioning for intuitive tracking. | Vision/position trackers: Microsoft Azure Kinect DK (Redmond, WA); Sony PlayStation Camera (San Mateo, CA); OptiTrack (Corvallis, OR); Intel RealSense Depth Cameras (Santa Clara, CA); OpenCV OAK (Palo Alto, CA); HTC Vive Tracker (Taiwan). Hand controllers: Sony PlayStation Move (San Mateo, CA); Sony DualSense Controller (San Mateo, CA); UltraLeap Trackers (United Kingdom). Face trackers: Tobii Face Trackers (Sweden); VIVE Facial Tracker (Taiwan) |
| Output displays | Display a three-dimensional virtual world (usually stereoscopic) from the user's eye positions, to create the perception that the virtual scene is independent of user movements. | Head-mounted displays: HTC Vive (Taiwan); Microsoft HoloLens (Redmond, WA); Oculus Rift and Quest (Menlo Park, CA); Sony PlayStation VR (San Mateo, CA); Valve Index (Bellevue, WA); HP Reverb G2 (Palo Alto, CA). Spatial displays: Sony Spatial Reality Display (San Diego, CA); Acer ConceptD 7 SpatialLabs (Taiwan). Mobile AR: iOS devices with ARKit (Cupertino, CA); Android devices with ARCore (Mountain View, CA) |
| Hardware and software | Manage input/output; analyze incoming data; compute and render 3D graphics based on input tracker feedback. | |
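To make the tracking-and-rendering loop concrete: trackers report orientation as yaw, pitch, roll angles, and the rendering hardware applies the corresponding rotation to scene coordinates on every frame. The sketch below (an illustration, not taken from the paper; all function names are hypothetical) builds a rotation matrix from these three angles and applies it to a tracked 3D point.

```python
import math

def rotation_matrix(yaw, pitch, roll):
    """3x3 rotation matrix from yaw (about z), pitch (about y), roll (about x),
    in radians, composed as R = Rz(yaw) @ Ry(pitch) @ Rx(roll)."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

def apply_rotation(R, v):
    """Rotate a 3D point v = (x, y, z) by matrix R."""
    return tuple(sum(R[i][j] * v[j] for j in range(3)) for i in range(3))

# A 90-degree yaw (head turn) carries the +x axis onto the +y axis:
p = apply_rotation(rotation_matrix(math.pi / 2, 0.0, 0.0), (1.0, 0.0, 0.0))
```

In a real VR runtime this per-frame pose update is handled by the headset's driver stack (e.g., via OpenXR), typically using quaternions rather than Euler angles to avoid gimbal lock; the matrix form above is only the simplest way to see what the tracker feedback does.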
Advantages and Disadvantages of VR technologies. (Only the table caption and the "ADVANTAGES" column header survive in this record; the cell contents were not captured.)