Abstract
There is great potential in enabling users to interact with digital information by integrating it with everyday physical objects. However, developing these interfaces requires programmers to acquire and abstract physical input. This is difficult and time-consuming, and it requires a high level of technical expertise in fields very different from user interface development—especially in the case of computer vision. Based on structured interviews with researchers, a literature review, and our own experience building physical interfaces, we created Papier-Mâché, a toolkit for integrating physical and digital interactions. Its library supports computer vision, electronic tags, and barcodes. Papier-Mâché introduces high-level abstractions for working with these input technologies that facilitate technology portability. We evaluated this toolkit through a laboratory study and longitudinal use in course and research projects, finding the input abstractions, technology portability, and monitoring facilities to be highly effective.