Abstract

We evaluate an approach for mobile smart objects to cooperate with projector-camera systems to achieve interactive projected displays on their surfaces without changing their appearance or function. Smart objects describe their appearance directly to the projector-camera system, enabling vision-based detection from their natural appearance. This detection is a significant challenge, as objects differ in appearance and appear at varying distances and orientations with respect to a tracking camera. We investigate four detection approaches representing different appearance cues and contribute three experimental studies analysing the impact on detection performance of, first, scale and rotation; second, the combination of multiple appearance cues; and third, the use of context information from the smart object. We find that appearance descriptions must be trained at the scales and orientations that yield the best detection performance, that multiple cues provide a clear performance gain over a single cue, and that context sensing masks distractions and clutter, further improving detection performance.

Keywords: Cooperative Augmentation; Smart Objects; Vision-Based Detection; Natural Appearance; Multi-Cue Detection
