Abstract

Recognition ability and, more broadly, machine learning techniques enable robots to perform complex tasks and allow them to function in diverse situations. In fact, robots can easily access an abundance of sensor data that are recorded in real time, such as speech, images, and video. Since such data are time sensitive, processing them in real time is a necessity. Moreover, machine learning techniques are known to be computationally intensive and resource hungry. As a result, an individual robot that is resource constrained, in terms of computation power and energy supply, is often unable to handle such heavy real-time computations alone. To overcome this obstacle, we propose a framework that harnesses the aggregate computational power of several low-power robots to enable efficient, dynamic, and real-time recognition. Our method adapts to the availability of computing devices at runtime and adjusts to the inherent dynamics of the network. Our framework can be applied to any distributed robot system. To demonstrate, with several Raspberry Pi 3-based robots (up to 12), each equipped with a camera, we implement a state-of-the-art action recognition model for videos and two recognition models for images. Our approach allows a group of multiple low-power robots to obtain performance (in terms of the number of images or video frames processed per second) similar to that of a high-end embedded platform, the Nvidia Tegra TX2.
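The core idea of adapting to device availability at runtime can be illustrated with a toy dispatcher that assigns each incoming frame to the least-loaded robot currently on the network, and keeps working as robots join or leave. This is a minimal sketch under our own assumptions (the class and worker names are hypothetical), not the paper's actual scheduling protocol.

```python
class Dispatcher:
    """Toy frame dispatcher: sends each frame to the least-loaded
    available worker, adapting as workers join or leave the network.
    Hypothetical sketch, not the framework's real implementation."""

    def __init__(self):
        self.load = {}  # worker id -> number of frames assigned so far

    def add_worker(self, wid):
        # A robot announces itself and becomes eligible for work.
        self.load[wid] = 0

    def remove_worker(self, wid):
        # A robot leaves (battery drain, network drop); stop assigning to it.
        self.load.pop(wid, None)

    def dispatch(self, frame):
        if not self.load:
            raise RuntimeError("no workers available")
        # Pick the worker with the fewest frames assigned (ties go to
        # the earliest-registered worker, since dicts preserve order).
        wid = min(self.load, key=self.load.get)
        self.load[wid] += 1
        return wid


d = Dispatcher()
d.add_worker("robot-1")
d.add_worker("robot-2")
assignments = [d.dispatch(f) for f in range(4)]
d.remove_worker("robot-1")  # a robot leaves mid-stream
assignments += [d.dispatch(f) for f in range(4, 6)]
print(assignments)
```

A real system would also track per-robot compute speed and in-flight frames rather than a simple assignment count, but the same principle applies: the dispatch decision is re-evaluated per frame against the current worker set, which is what lets throughput scale with the number of robots available.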
