Abstract

Information visualization concerns the use of interactive computer graphics to enable people to obtain insight into large amounts of abstract data. To this end, data objects and their attributes have to be graphically encoded such that users can easily obtain such insight. Multivariate data are often presented by glyphs or icons: graphical symbols whose visual features, such as color, shape, size, and position, encode data attributes. However, it is currently not clear how this should be done optimally, and models for this are either unavailable or incomplete, especially in the context of visual analytic tasks. Our research focuses on the development of quantitative user models for graphical encoding and aims to answer the following question: how can we quantitatively model the perception of visual encodings, in particular the relation between visual features and task performance for standard information visualization tasks, in order to obtain optimal visual encodings? To answer this question, we have decomposed our work into several steps of increasing complexity and difficulty. In the first step, we set up a methodology to test and model the human perception process of correlation analysis for two different visualization methods: scatterplots and parallel-coordinate plots. Statistical estimation enables us to compare both methods quantitatively. In the second step, we test and quantify the human perception of differences between glyphs. This more complicated visual analysis process is modeled using the methodology developed in the previous step, leading to the construction of a perceptually uniform space for glyphs, where each glyph's location indicates a specific configuration. Distances between glyphs represent their discriminability and are derived from measured user performance in visual analytic tasks, such as finding outliers and patterns.
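The abstract does not include code, but the construction it describes in the second step, placing glyphs in a space so that Euclidean distance approximates measured discriminability, is the kind of problem classical multidimensional scaling solves. The sketch below is purely illustrative: the dissimilarity matrix is invented, and the thesis may use a different embedding method.

```python
import numpy as np

def classical_mds(D, dims=2):
    """Embed items so that pairwise Euclidean distances approximate
    the dissimilarity matrix D (classical multidimensional scaling)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)     # ascending eigenvalues
    idx = np.argsort(eigvals)[::-1][:dims]   # keep the largest ones
    scale = np.sqrt(np.maximum(eigvals[idx], 0))
    return eigvecs[:, idx] * scale           # one coordinate row per glyph

# Hypothetical pairwise discriminability scores for four glyphs
# (larger = easier to tell apart); the values are illustrative only.
D = np.array([[0.0, 1.0, 2.0, 3.0],
              [1.0, 0.0, 1.0, 2.0],
              [2.0, 1.0, 0.0, 1.0],
              [3.0, 2.0, 1.0, 0.0]])
X = classical_mds(D, dims=1)  # 1-D perceptual coordinates for each glyph
```

Since this particular `D` is consistent with points on a line, the recovered coordinates reproduce the dissimilarities exactly; with real, noisy user data the embedding would only approximate them.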
In the third step, mapping schemes are obtained between the physical metrics of glyphs varying in a single feature and the perceived scale of the glyphs as portrayed in the space. Different mapping schemes from psychophysical theories are considered, compared, and adapted. A model is derived in which individual user variation and a general human perception component are modeled separately. We test size and lightness separately; for both, nonlinear mapping functions are obtained and interpreted. In the fourth step, a more complex situation is considered, in which two visual features of glyphs are varied simultaneously. A generic mapping scheme of human perception is obtained for the interaction between size and lightness. In the last step, visual encoding schemes are generated using our space model and mapping mechanism. An optimization algorithm is used to compute configurations in which the glyphs are optimally discriminable. We show different use cases to which the model can be applied. The results of this research can be used to improve the effectiveness and efficiency of visual analysis performed by users. Moreover, the generic approach that has been developed can be used to model other graphical encoding schemes.
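The nonlinear mapping functions mentioned in the third step follow the pattern of psychophysical laws such as Stevens' power law, which relates physical magnitude to perceived magnitude. The sketch below is not taken from the thesis; the exponent `beta = 0.7` is an illustrative value in the range commonly reported for perceived visual area, and the thesis fits its own functions from user data.

```python
import numpy as np

def stevens_power_law(physical, k=1.0, beta=0.7):
    """Perceived magnitude as a power function of physical magnitude
    (Stevens' power law); beta < 1 compresses large stimuli."""
    return k * np.power(physical, beta)

# To make four glyph sizes *appear* evenly spaced, invert the law:
perceived = np.linspace(1.0, 4.0, 4)      # target perceived steps
physical = perceived ** (1.0 / 0.7)       # physical sizes to render
```

Inverting the fitted mapping in this way is how a perceptually uniform scale is turned back into concrete rendering parameters: evenly spaced perceived values require increasingly larger physical steps when the exponent is below one.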
