Abstract

Small mobile robots can be useful in dangerous, high-risk applications such as disaster response. To this end, small robots must be capable of autonomous navigation. One way to achieve autonomous navigation is by learning perception-action cycles. Learned perception-action cycles may enable computationally and data-efficient ways to transfer navigation policies between robots and to generalize across operating environments. Learning perception-action cycles currently relies on deep networks. Such networks, however, may not be directly applicable to small robots because of these robots' constrained sensing and computing capacity. To mitigate this challenge, we identify minimalistic neural network architectures that approximate an obstacle prediction function from a robot's observation and action history. We propose a new learning-based algorithm for small robot navigation in partially-known, partially-observable environments. The performance of the algorithm and its ability to generalize are evaluated in simple and complex environments of varying size.
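To make the idea of a minimalistic obstacle predictor concrete, the sketch below shows one possible small network that maps a fixed-length observation and action history to obstacle-occupancy probabilities. All dimensions, the feature encoding, and the layer sizes are illustrative assumptions, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions -- the abstract does not specify these values.
HISTORY_LEN = 8   # number of past time steps kept in the buffer
OBS_DIM = 16      # e.g. a coarse range/depth reading per step
ACT_DIM = 2       # e.g. (linear velocity, angular velocity)
N_CELLS = 9       # obstacle-occupancy predictions for nearby grid cells


class ObstaclePredictor(nn.Module):
    """Minimal MLP mapping an observation-action history to obstacle
    occupancy probabilities (illustrative sketch, not the paper's model)."""

    def __init__(self):
        super().__init__()
        in_dim = HISTORY_LEN * (OBS_DIM + ACT_DIM)
        self.net = nn.Sequential(
            nn.Linear(in_dim, 32),   # deliberately small hidden layers
            nn.ReLU(),
            nn.Linear(32, 32),
            nn.ReLU(),
            nn.Linear(32, N_CELLS),  # one logit per nearby cell
        )

    def forward(self, obs_hist, act_hist):
        # obs_hist: (batch, HISTORY_LEN, OBS_DIM)
        # act_hist: (batch, HISTORY_LEN, ACT_DIM)
        x = torch.cat([obs_hist, act_hist], dim=-1).flatten(start_dim=1)
        return torch.sigmoid(self.net(x))  # occupancy probability per cell


# Example usage with random inputs:
# probs = ObstaclePredictor()(torch.randn(1, 8, 16), torch.randn(1, 8, 2))
```

A network of this size keeps the parameter count in the low thousands, which is the kind of footprint a compute-constrained small robot could plausibly evaluate onboard.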
