Abstract

A defining characteristic of visual analytics is the tight integration between automatic computation and interactive visualization. This generally corresponds to the availability of powerful algorithms for manipulating the data under analysis, transforming it to feed suitable visualizations. This paper focuses on general-purpose automatic computations and presents a methodological framework that can improve the quality of the visualizations used in the analytical process, exploiting both the dataset at hand and the actual visualization. In particular, the paper addresses the critical issue of visual clutter reduction, presenting a general strategy for analyzing and reducing clutter through random data sampling. The basic idea is to model the visualization in a virtual space in order to analyze both clutter and data features (e.g., absolute density and relative density). In this way we can measure the visual overlap that is likely to affect a visualization representing a large dataset, obtaining precise visual quality metrics about the visualization's degradation and devising automatic sampling strategies to improve the overall image quality. Metrics and algorithms have been tuned according to the results of suitable user studies. We describe our proposal through two running case studies, one on 2D scatterplots and the other on parallel coordinates.
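To make the core idea concrete, the following is a minimal sketch, not the authors' actual metric or algorithm: it models a 2D scatterplot on a virtual pixel grid, estimates clutter as the fraction of occupied pixels that receive more than one data point, and randomly subsamples until that estimate falls below a target. The function names, the grid resolution, the overlap-ratio definition, and the 10% decimation step are all illustrative assumptions.

```python
import numpy as np

def overlap_ratio(x, y, width=800, height=600):
    """Estimate clutter as the share of occupied pixels hit by >1 point.
    (Assumed metric for illustration; the paper's metrics may differ.)"""
    # Map data coordinates onto a virtual pixel grid (the "virtual space").
    cols = np.clip(((x - x.min()) / (np.ptp(x) or 1.0) * (width - 1)).astype(int),
                   0, width - 1)
    rows = np.clip(((y - y.min()) / (np.ptp(y) or 1.0) * (height - 1)).astype(int),
                   0, height - 1)
    counts = np.zeros((height, width), dtype=int)
    np.add.at(counts, (rows, cols), 1)          # per-pixel point density
    occupied = (counts > 0).sum()
    return (counts > 1).sum() / max(occupied, 1)

def sample_to_target(x, y, target=0.1, seed=None):
    """Randomly drop 10% of the points per round until estimated
    overlap is below `target`; returns the surviving indices."""
    rng = np.random.default_rng(seed)
    idx = np.arange(len(x))
    while len(idx) > 1 and overlap_ratio(x[idx], y[idx]) > target:
        idx = rng.choice(idx, size=int(len(idx) * 0.9), replace=False)
    return idx
```

A usage note on the design: decimating by a fixed fraction per round keeps the sampling uniform (so relative densities are roughly preserved), while the loop condition ties the stopping point to the measured visual quality rather than to an arbitrary sample size.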
