Abstract

This paper presents a generic design and implementation of a multi-modal operator interface (MOI) dedicated to supervisory control systems. We present and evaluate the interaction and control modes integrated so far into our experimental system: tele-manipulation, speech-based control, vision-based control, and model-based control. Particular emphasis is placed on the vision-based control and tele-manipulation modes, which we originally developed. The combination of these control modes through the MOI is then used to achieve complex robotic tasks. Moreover, by incorporating intelligent functions and fusing data from different sensors, the MOI enables multi-modal interaction ranging from shared mode through traded mode to superimposed mode, which entails multi-level robot control and adaptive automation. Robotics applications consisting of square (circular) peg-in-hole insertion are described using the tele-manipulation and vision-based control modes. The advantage of the multi-modal interaction mode over the unimodal mode is highlighted. Finally, an improvement of the developed MOI is investigated by incorporating an anticipatory system that protects against potentially catastrophic operator errors.

