Abstract

The recent development of autonomous vehicles has attracted much attention, but operating these vehicles may be too complex for average users. Therefore, we propose an intuitive, multimodal interface for controlling autonomous vehicles that uses speech and gesture recognition to interpret and execute user commands. For example, if the user says “turn there” while pointing at a landmark, the vehicle can combine both modalities to correctly understand and comply with the user’s intent. To achieve this, we designed a two-part interface consisting of a multimodal understanding component and a dialog control component. These components can be seen as a concatenation of two separate transducers: one is used for multimodal understanding and the other for a conventional dialog system. We then construct a combined transducer from these two transducers. We developed various scenarios that might arise while operating an autonomous vehicle and displayed these...
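
To make the transducer cascade concrete, below is a minimal sketch (not taken from the paper) of one plausible reading of the combined transducer: the multimodal understanding transducer maps fused speech/gesture tokens to semantic symbols, the dialog transducer maps those semantic symbols to vehicle commands, and the two are combined so that the first transducer's outputs feed the second transducer's inputs. All state names, symbols, and the toy "turn there + pointing" vocabulary are illustrative assumptions.

```python
from typing import Dict, Tuple, List

# A deterministic transducer modeled as:
# {(state, input_symbol): (next_state, output_symbol)}
Transducer = Dict[Tuple, Tuple]

# Transducer 1: multimodal understanding
# Maps fused speech/gesture tokens to semantic symbols (hypothetical labels).
understanding: Transducer = {
    ("q0", "turn"):        ("q1", "ACTION=turn"),
    ("q1", "there+point"): ("q2", "TARGET=pointed_landmark"),
}

# Transducer 2: dialog control
# Maps semantic symbols to dialog/vehicle commands (hypothetical labels).
dialog: Transducer = {
    ("d0", "ACTION=turn"):             ("d1", "cmd:prepare_turn"),
    ("d1", "TARGET=pointed_landmark"): ("d2", "cmd:turn_at(landmark)"),
}

def combine(t1: Transducer, t2: Transducer) -> Transducer:
    """Build a combined transducer whose states are pairs (s1, s2):
    t1's output symbols are matched against t2's input symbols."""
    combined: Transducer = {}
    for (s1, a), (s1_next, mid) in t1.items():
        for (s2, mid2), (s2_next, c) in t2.items():
            if mid == mid2:
                combined[((s1, s2), a)] = ((s1_next, s2_next), c)
    return combined

def run(t: Transducer, start, inputs: List[str]) -> List[str]:
    """Feed an input sequence through a transducer, collecting outputs."""
    state, outputs = start, []
    for symbol in inputs:
        state, out = t[(state, symbol)]
        outputs.append(out)
    return outputs

if __name__ == "__main__":
    combined = combine(understanding, dialog)
    # "turn there" speech fused with a pointing gesture:
    print(run(combined, ("q0", "d0"), ["turn", "there+point"]))
    # -> ['cmd:prepare_turn', 'cmd:turn_at(landmark)']
```

In this sketch, building the combined machine up front means a single pass over the fused speech/gesture input directly yields vehicle commands, which mirrors the abstract's idea of constructing one transducer from the understanding and dialog transducers.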
