Abstract

This paper presents a methodology for detecting an accelerometer-equipped object among potentially many other moving objects in a camera scene. By matching sensor readings from a wearable accelerometer with analogous readings derived from a single camera or multiple cameras, we detect instances of the same physical movement captured by both modalities. This capability has a wide range of potential applications in the cyber-physical systems domain, such as identification, localization, and context detection for activity recognition. We present an approach that projects data from the camera's frame of reference into the accelerometer's frame, where both share the same physical representation, so the two modalities can be compared and their similarity computed algorithmically. This is challenging because depth is unknown when using a single 2D camera. When depth does not vary significantly during the motion, we translate camera measurements into the physical acceleration domain and obtain a depth estimate. We model this translation as an optimization problem that finds the depth maximizing the similarity between the camera and accelerometer readings. Additionally, we discuss a potential multi-camera solution that handles motions with arbitrarily varying depth. Experimental results demonstrate that the system can match physical movements observed by a wearable accelerometer with those observed by a single camera or multiple cameras.
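The single-camera formulation described above, scaling pixel acceleration by an unknown constant depth and optimizing that depth for best agreement with the accelerometer, can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not the authors' implementation: it assumes a pinhole camera with known focal length f (in pixels), gravity-compensated accelerometer samples already resampled to the camera frame rate, a roughly constant depth z during the motion, and a least-squares distance as the dissimilarity measure. All function and variable names here are hypothetical.

import numpy as np
from scipy.optimize import minimize_scalar

def pixel_acceleration(track, dt):
    # Second finite difference of a 2D pixel trajectory (N x 2 array),
    # giving pixel-space acceleration in pixels/s^2.
    return np.diff(track, n=2, axis=0) / dt**2

def estimate_depth(track, accel, dt, f, z_bounds=(0.5, 10.0)):
    # Under a pinhole model with constant depth z (meters) and focal
    # length f (pixels), pixel acceleration a_pix maps to a physical
    # acceleration of approximately (z / f) * a_pix in m/s^2.
    a_pix = np.linalg.norm(pixel_acceleration(track, dt), axis=1)
    # Accelerometer magnitudes, assumed gravity-compensated and
    # time-aligned with the camera samples; truncate to equal length.
    a_imu = np.linalg.norm(accel, axis=1)[: len(a_pix)]

    def cost(z):
        # Least-squares mismatch between camera-derived and measured
        # acceleration magnitudes for a candidate depth z.
        return np.sum(((z / f) * a_pix - a_imu) ** 2)

    res = minimize_scalar(cost, bounds=z_bounds, method="bounded")
    return res.x, cost(res.x)

With this sketch, a low residual cost at the optimal z would indicate that the tracked object and the accelerometer likely underwent the same motion; comparing residuals across several tracked objects would then select the best match. Note that a scale-invariant similarity such as Pearson correlation would not constrain z, which is why a scale-sensitive distance is used here.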
