Abstract

Recent years have witnessed increasing attention from both academia and industry on contact-free acoustic sensing. Owing to the pervasiveness of audio devices and the fine granularity of acoustic sensing, it has been applied in numerous fields, including human-computer interaction and contact-free health sensing. Though promising, acoustic sensing has a limited working range, which hinders its wide adoption in real life. To break this range limit, we propose to deploy the acoustic device on a moving platform (i.e., a robot) to support applications that require larger coverage and continuous sensing. In this paper, we propose SonicBot, a system that enables contact-free acoustic sensing under device motion. We design a sequence of signal processing schemes to eliminate the impact of device motion and recover clean target movement information that would otherwise be overwhelmed by device movement. We implement SonicBot using commercial audio devices and conduct extensive experiments to evaluate the performance of the proposed system. Experimental results show that our system achieves median errors of 1.11 cm and 1.31 mm for coarse-grained and fine-grained tracking, respectively. To showcase the applicability of the proposed system in real-world settings, we perform two field studies, covering coarse-grained gesture sensing and fine-grained respiration monitoring while the acoustic device moves along with a robot.
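To give a flavor of the core idea, the sketch below illustrates one generic way motion compensation of this kind can work: the phase shift induced by the device's own displacement (e.g., estimated from robot odometry) is subtracted from the measured reflection phase so that only the target-induced phase remains. This is a minimal illustration under assumed parameters (carrier frequency, round-trip model, odometry-based displacement), not the paper's actual processing pipeline.

```python
import numpy as np

# Illustrative parameters (assumptions, not from the paper).
SPEED_OF_SOUND = 343.0        # m/s
CARRIER_FREQ = 20_000.0       # Hz, an inaudible carrier commonly used in acoustic sensing
WAVELENGTH = SPEED_OF_SOUND / CARRIER_FREQ

def remove_device_motion(measured_phase, device_displacement):
    """Subtract the phase shift caused by the sensing device's own motion.

    measured_phase      : unwrapped phase of the reflected signal (radians)
    device_displacement : estimated device displacement per sample (meters),
                          e.g., from robot odometry (an assumed source)
    Returns the residual phase attributable to target movement.
    """
    # A device displacement d changes the round-trip path by roughly 2*d,
    # which shows up as a phase shift of 2*pi*(2*d)/wavelength.
    device_phase = 2 * np.pi * (2 * device_displacement) / WAVELENGTH
    return measured_phase - device_phase

def phase_to_displacement(residual_phase):
    """Convert residual round-trip phase back to target displacement (meters)."""
    return residual_phase * WAVELENGTH / (4 * np.pi)

# Example: a target moving 1 mm while the device moves 5 cm.
device_d = np.linspace(0.0, 0.05, 100)                     # device path (m)
target_d = np.linspace(0.0, 0.001, 100)                    # target motion (m)
measured = 2 * np.pi * 2 * (device_d + target_d) / WAVELENGTH
recovered = phase_to_displacement(remove_device_motion(measured, device_d))
print(f"recovered target motion: {recovered[-1]*1000:.2f} mm")
```

In practice, real systems must also handle imperfect motion estimates, multipath, and phase unwrapping; the point here is only that device-induced and target-induced path changes are additive and can, in principle, be separated.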
