Sasan Matinfar1, M Ali Nasseri2,3, Ulrich Eck2, Michael Kowalsky2, Hessam Roodaki2, Navid Navab4, Chris P Lohmann3, Mathias Maier3, Nassir Navab2,5. 1. Computer Aided Medical Procedures, Technische Universität München, Munich, Germany. sasan.matinfar@campus.lmu.de.com. 2. Computer Aided Medical Procedures, Technische Universität München, Munich, Germany. 3. Augenklinik rechts der Isar, Technische Universität München, Munich, Germany. 4. Topological Media Lab, Concordia University, Montreal, Canada. 5. Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, USA.
Abstract
PURPOSE: Advances in sensing and digitalization enable us to acquire and present various heterogeneous datasets to enhance clinical decisions. Visual feedback is the dominant way of conveying such information. However, environments rich in sources of information that are all presented through the same channel pose the risk of overstimulation and of missing crucial information. Augmenting the cognitive field with additional perceptual modalities such as sound is a workaround for this problem. A major challenge in auditory augmentation is the automatic generation of pleasant and ergonomic audio in complex routines, as opposed to overly simplistic feedback, in order to avoid alarm fatigue. METHODS: In this work, without loss of generality to other procedures, we propose a method for the aural augmentation of medical procedures via automatic modification of musical pieces. RESULTS: Evaluations of this concept regarding the recognizability of the conveyed information, along with its qualitative aesthetics, show the potential of our method. CONCLUSION: We proposed a novel sonification method for the automatic musical augmentation of tasks within surgical procedures. Our experimental results suggest that these augmentations are aesthetically pleasing and have the potential to convey useful information. This work opens a path for advanced sonification techniques in the operating room that complement traditional visual displays and convey information more efficiently.
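The core idea of sonification by modifying an existing musical piece can be illustrated with a minimal sketch. The mapping below is purely hypothetical (the paper does not specify this parameterization): it maps a tool-to-target distance to a pitch transposition of a melody given as MIDI note numbers, so the music becomes audibly altered as the instrument approaches the target.

```python
def distance_to_semitones(distance_mm, max_distance_mm=10.0, max_shift=12):
    """Map a tool-to-target distance to a pitch shift in semitones.

    Hypothetical linear mapping: at or beyond max_distance_mm the music
    plays unmodified (shift 0); at zero distance it is shifted by a full
    octave (12 semitones), making proximity clearly audible.
    """
    # Clamp the measured distance to the sonified range.
    d = min(max(distance_mm, 0.0), max_distance_mm)
    # Linear interpolation between 0 and max_shift semitones.
    return round(max_shift * (1.0 - d / max_distance_mm))


def transpose(midi_notes, semitones):
    """Transpose a melody (list of MIDI note numbers), clamped to 0-127."""
    return [min(max(n + semitones, 0), 127) for n in midi_notes]
```

For example, a C-major triad `[60, 64, 67]` played while the tool is 5 mm from the target would be shifted up by six semitones. In practice such a mapping would drive a MIDI synthesizer or audio engine in real time rather than return note lists.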