Many physical objects in our daily lives are grouped or ranked according to a fixed presentation style. In a library, for example, books are typically shelved and ordered by classification number. For better comparison, however, we often need to re-group or re-rank the books by additional attributes, such as rating, publisher, reader comments, publication year, keywords, or price, or by a combination of these factors. In this article, we propose DRCmpVis, a novel mobile application framework based on diminished reality (DR) and mixed reality (MR) that enables in-context multi-attribute comparison of physical objects bearing text labels or other textual information. The physical objects are scanned in the real world with a mobile camera. Each scanned object is then segmented and labeled by a convolutional neural network and replaced (diminished) by a virtual avatar in a DR environment. We formulate three visual comparison strategies, namely filtering, re-grouping, and re-ranking, which can be performed intuitively, flexibly, and seamlessly on the avatars without disturbing the original layout of the physical objects. The computational resources of the virtual space can be fully exploited to support efficient object searching and multi-attribute visual comparison. We demonstrate the usability, expressiveness, and efficiency of DRCmpVis through a user study, a NASA TLX assessment, a quantitative evaluation, and case studies covering different scenarios.
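The abstract does not specify an implementation, so the sketch below is only an illustration of how the three comparison strategies might operate on attribute records attached to each avatar. All names here (Avatar, filter_avatars, regroup, rerank) and the attribute schema are hypothetical, not the authors' API.

```python
from dataclasses import dataclass, field
from itertools import groupby
from typing import Any, Callable


@dataclass
class Avatar:
    """Virtual stand-in for one scanned physical object (hypothetical schema)."""
    label: str                 # text label recognized on the object
    attrs: dict[str, Any] = field(default_factory=dict)  # e.g. rating, year, price


def filter_avatars(avatars: list[Avatar],
                   keep: Callable[[Avatar], bool]) -> list[Avatar]:
    """Filtering: keep only avatars that satisfy a user-chosen predicate."""
    return [a for a in avatars if keep(a)]


def regroup(avatars: list[Avatar], key: str) -> dict[Any, list[Avatar]]:
    """Re-grouping: bucket avatars by a shared attribute value."""
    ordered = sorted(avatars, key=lambda a: a.attrs[key])
    return {k: list(g) for k, g in groupby(ordered, key=lambda a: a.attrs[key])}


def rerank(avatars: list[Avatar], keys: list[str],
           descending: bool = True) -> list[Avatar]:
    """Re-ranking: order avatars by one or more attributes, leaving the
    physical objects untouched."""
    return sorted(avatars,
                  key=lambda a: tuple(a.attrs[k] for k in keys),
                  reverse=descending)


# Example: books re-ranked by (rating, year), then filtered to one publisher.
books = [
    Avatar("Book A", {"rating": 4.5, "year": 2019, "publisher": "X"}),
    Avatar("Book B", {"rating": 4.8, "year": 2021, "publisher": "Y"}),
    Avatar("Book C", {"rating": 4.5, "year": 2022, "publisher": "X"}),
]
best_first = rerank(books, ["rating", "year"])
only_x = filter_avatars(best_first, lambda a: a.attrs["publisher"] == "X")
print([a.label for a in only_x])  # ['Book C', 'Book A']
```

In the actual system these operations would presumably drive the spatial layout of the avatars in the DR scene; here they simply reorder plain records to show the data-side logic.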