| Literature DB >> 17993707 |
Catherine Plaisant, Jean-Daniel Fekete, Georges Grinstein.
Abstract
Information Visualization (InfoVis) is now an accepted and growing field, but questions remain about the best uses for and the maturity of novel visualizations. Usability studies and controlled experiments are helpful, but generalization is difficult. We believe that the systematic development of benchmarks will facilitate the comparison of techniques and help identify their strengths under different conditions. We were involved in the organization and management of three information visualization contests for the 2003, 2004 and 2005 IEEE InfoVis Symposia, which requested teams to report on insights gained while exploring data. We give a summary of the state of the art of evaluation in information visualization, describe the three contests, summarize their results, discuss outcomes and lessons learned, and conjecture the future of visualization contests. All materials produced by the contests are archived in the InfoVis Benchmark Repository.
Year: 2008 | PMID: 17993707 | DOI: 10.1109/TVCG.2007.70412
Source DB: PubMed | Journal: IEEE Trans Vis Comput Graph | ISSN: 1077-2626 | Impact factor: 4.579