Abstract

Leader-follower autonomy (LFA) research has so far focused on vehicles following other vehicles; despite several decades of work on the topic, the known literature contains no human-vehicle leader-follower systems. We present a system in which an autonomous vehicle—our ACTor 1 platform—follows a human leader who controls the vehicle through hand-and-body gestures. We developed a modular pipeline that uses artificial intelligence/deep learning to recognize hand-and-body gestures from a user in view of the vehicle’s camera and to translate those gestures into physical action by the vehicle. We demonstrate our work on the ACTor 1 platform, a modified Polaris Gem 2. Results show that our modular pipeline reliably recognizes human body language and translates it into LFA commands in real time. This work has numerous applications, such as material transport in industrial contexts.

Highlights

  • Studies of Leader-follower autonomy (LFA) systems include the development of mathematical models [1–5], testing in simulations [1–8], and live experiments [4–6,8–10] with both two-robot and multi-robot systems

  • More recent solutions are based on convolutional neural networks (CNNs), a machine-learning mechanism commonly used for image recognition tasks

  • We demonstrate the practical application of deep-learning based gesture recognition as a control mechanism for human-vehicle LFA as a more natural and versatile alternative to traditional fiducial markers
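
The highlight above mentions convolutional neural networks for image recognition. As a minimal illustration of the core operation such networks are built from (a sketch only, not the paper's actual gesture-recognition model), the snippet below applies a hand-written 2D convolution and ReLU to a tiny image; trained CNNs learn many such kernels automatically:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, the core CNN building block."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge kernel responds where intensity changes left-to-right --
# the kind of low-level feature early CNN layers learn during training.
image = np.zeros((5, 5))
image[:, 2:] = 1.0                     # left half dark, right half bright
edge_kernel = np.array([[-1.0, 1.0]])  # 1x2 vertical-edge detector
response = np.maximum(conv2d(image, edge_kernel), 0.0)  # ReLU activation
print(response)  # peaks at the dark-to-bright boundary in each row
```

A real gesture classifier stacks many such convolution layers with learned kernels, followed by pooling and fully connected layers that map the extracted features to gesture classes.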


Summary

Leader-Follower Background

Leader-follower autonomy (LFA) systems, in which one or more autonomous vehicles follow other vehicles without the need for a human operator, form a field that has seen continuous development over the last several decades. Studies of LFA systems include the development of mathematical models [1–5], testing in simulations [1–8], and live experiments [4–6,8–10] with both two-robot and multi-robot systems. Demonstrations of LFA systems have been conducted with real applications in mind on land, air, and sea [6,10]. Despite the work done on vehicle-vehicle (VV) systems, academic LFA development has largely ignored human-vehicle (HV) systems on large-scale autonomous vehicles. Though there have been studies on human-robot following [11,12] as well as applications in the commercial sector [13], these studies concern personal interactions with smaller robots rather than work with medium- or large-sized vehicles.
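
The mathematical models cited above vary, but a common minimal formulation has the follower regulate its speed to hold a desired gap to the leader. The sketch below uses illustrative gains and limits of our own choosing, not any specific model from [1–5]:

```python
def follower_speed(gap, leader_speed, desired_gap=10.0, k=0.5, v_max=15.0):
    """Proportional gap-keeping control: match the leader's speed plus a
    correction proportional to the gap error. All parameters (desired_gap,
    gain k, v_max) are illustrative placeholders, not values from the paper."""
    error = gap - desired_gap       # positive when the follower lags too far back
    v = leader_speed + k * error    # speed up to close the gap, slow to open it
    return max(0.0, min(v, v_max))  # clamp to the vehicle's physical limits

# Follower 14 m behind a leader travelling at 8 m/s: 4 m too far back,
# so it commands 8 + 0.5 * 4 = 10 m/s until the gap closes.
print(follower_speed(gap=14.0, leader_speed=8.0))  # -> 10.0
```

In a human-vehicle system like the one presented here, the same control structure applies, but the leader state comes from perceiving a person rather than from another vehicle's telemetry.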

Gesture Recognition Background
Previous and Novel Work
Gestures
Neural Network Development
Building a Convolutional Neural Network
Modular Pipeline Design
ROS Fundamentals
ROS Node Design
Velodyne Nodelet Manager
Experiment and Results
Person Behind User
Summary
Future Work
