Emmanuel Pietriga, Olivier Bau, Caroline Appert.
Abstract
Focus+context interaction techniques based on the metaphor of lenses are used to navigate and interact with objects in large information spaces. They provide in-place magnification of a region of the display without requiring users to zoom into the representation and consequently lose context. In order to avoid occlusion of its immediate surroundings, the magnified region is often integrated in the context using smooth transitions based on spatial distortion. Such lenses have been developed for various types of representations using techniques often tightly coupled with the underlying graphics framework. We describe a representation-independent solution that can be implemented with minimal effort in different graphics frameworks, ranging from 3D graphics to rich multiscale 2D graphics combining text, bitmaps, and vector graphics. Our solution is not limited to spatial distortion and provides a unified model that makes it possible to define new focus+context interaction techniques based on lenses whose transition is defined by a combination of dynamic displacement and compositing functions. We present the results of a series of user evaluations that show that one such new lens, the speed-coupled blending lens, significantly outperforms all others.
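The unified model described in the abstract (a lens whose focus-to-context transition combines a dynamic displacement function with a compositing function) can be sketched as follows. This is a minimal illustration, not the paper's actual API: all function names, signatures, and parameter values (inner/outer radii, maximum magnification, the speed threshold for blending) are assumptions made for this sketch.

```python
# Hypothetical sketch of a lens defined by two functions of the
# distance d from the lens center (all names and values are assumed):
#   - drop_off(d): spatial displacement weight in [0, 1], controlling
#     how magnification falls off from focus to context
#   - compositing(d, speed): alpha of the magnified layer, here
#     modeling a speed-coupled blending lens (the faster the lens
#     moves, the more translucent its focus, revealing the context)

def drop_off(d, inner=50.0, outer=100.0):
    """Linear displacement weight: 1 inside the focus, 0 in the context."""
    if d <= inner:
        return 1.0
    if d >= outer:
        return 0.0
    return (outer - d) / (outer - inner)

def compositing(d, speed, outer=100.0, max_speed=400.0):
    """Speed-coupled blending: focus alpha decreases with lens speed
    (speed in px/s; max_speed is an assumed saturation threshold)."""
    if d >= outer:
        return 0.0  # outside the lens, only the context is drawn
    return max(0.0, 1.0 - min(speed, max_speed) / max_speed)

def local_scale(d, mm=4.0, inner=50.0, outer=100.0):
    """Effective magnification at distance d: interpolates between the
    maximum magnification mm in the focus and 1.0 in the context."""
    return 1.0 + (mm - 1.0) * drop_off(d, inner, outer)
```

A purely distortion-based lens would keep `compositing` constant at 1.0 inside the lens; a purely blending-based lens would keep `drop_off` flat; the model's point is that both dimensions can be varied independently and combined.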
Year: 2010 PMID: 20224140 DOI: 10.1109/TVCG.2009.98
Source DB: PubMed Journal: IEEE Trans Vis Comput Graph ISSN: 1077-2626 Impact factor: 4.579