Abstract
An automatic “museum audio guide” is presented as a new type of audio guide for museums. The device consists of a headset equipped with a camera that captures pictures of the exhibits and the Eyes of Things (EoT) computer vision board. The EoT board recognizes artworks using features from accelerated segment test (FAST) keypoints and a random forest classifier, and can operate for an entire day without recharging the batteries. In addition, application logic has been implemented that triggers highly efficient behavior upon recognition of a painting. Two different use-case scenarios have been implemented. The main testing was performed during a piloting phase in a real-world museum. The results confirm the system’s main promised benefit, simplicity of use, and show that users prefer the proposed system over traditional audio guides.
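The recognition pipeline named in the abstract starts from FAST keypoints. As a minimal, illustrative sketch of the FAST segment test only (the threshold, arc length, and synthetic image below are assumptions for demonstration, not values from the paper, and the random forest stage is omitted):

```python
# Offsets (dr, dc) of the 16 pixels on the Bresenham circle of radius 3 (FAST-16).
CIRCLE = [(-3, 0), (-3, 1), (-2, 2), (-1, 3), (0, 3), (1, 3), (2, 2), (3, 1),
          (3, 0), (3, -1), (2, -2), (1, -3), (0, -3), (-1, -3), (-2, -2), (-3, -1)]

def is_fast_corner(img, r, c, t=20, n=9):
    """Segment test: pixel (r, c) is a corner if at least n contiguous circle
    pixels are all brighter than img[r][c] + t or all darker than img[r][c] - t."""
    center = img[r][c]
    ring = [img[r + dr][c + dc] for dr, dc in CIRCLE]
    for sign in (1, -1):                 # check the brighter arc, then the darker arc
        flags = [sign * (v - center) > t for v in ring]
        run = best = 0
        for f in flags + flags:          # doubling the list handles wrap-around runs
            run = run + 1 if f else 0
            best = max(best, run)
        if best >= n:
            return True
    return False

# Synthetic example: a bright square on a dark background. Its top-left pixel
# passes the test, while flat regions and straight edges are rejected.
img = [[255 if r >= 10 and c >= 10 else 0 for c in range(20)] for r in range(20)]
print(is_fast_corner(img, 10, 10))   # corner of the square
print(is_fast_corner(img, 5, 5))     # flat region
print(is_fast_corner(img, 10, 15))   # straight edge
```

In the full system, descriptors built around such keypoints would feed the random forest classifier that identifies the artwork; on the EoT board this step runs on the Myriad 2 VPU rather than in Python.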
Highlights
The paradigm of cyber-physical systems (CPSs) aims at a major intertwining of both the computing system and the physical world [1]
This platform is optimized to maximize inferred information per milliwatt and to adapt the quality of inferred results to each particular application. This means more hours of continuous operation and the possibility of creating novel applications and services that were previously unfeasible due to size, cost, and power restrictions. We describe one such experience, in which the Eyes of Things (EoT) computer vision platform is used to develop a novel automatic “museum audio guide”: an audio headset equipped with the highly efficient EoT board and a small camera
Myriad 2 is a heterogeneous multicore vision processing unit (VPU) composed of twelve proprietary very long instruction word (VLIW) processors called SHAVEs, two 32-bit reduced instruction set computer (RISC) processors (LeonOS and LeonRT), and a hardware acceleration pipeline for computer vision tasks (Figure 2)
Summary
The paradigm of cyber-physical systems (CPSs) aims at a major intertwining of both the computing system and the physical world [1]. The lack of suitable open hardware has prevented widespread adoption and commercialization, as recently observed by some researchers [20]: “Given the fair number of proposed designs, it is somewhat surprising that a general-purpose embedded smart camera, based on an open architecture is difficult to find, and even more difficult to buy.” Mobile devices such as smartphones and tablets are the closest current example of versatile mobile vision systems. The entire project was based on four fundamental pillars: cost, power consumption, size, and programming flexibility [22,23]. This platform is optimized to maximize inferred information per milliwatt and to adapt the quality of inferred results to each particular application.