Abstract

To achieve better navigation performance for a mobile robot in unknown environments, a novel human–robot hybrid system incorporating motor-imagery (MI)-based brain teleoperation control is presented in this paper, in which deep-learning-based active perception is developed within the simultaneous localization and mapping (SLAM) framework. By applying deep-learning-based object recognition during red–green–blue-depth (RGB-D) data acquisition, the designed SLAM approach selects valid feature points effectively, and displacement tracking is accelerated by combining the oriented FAST and rotated BRIEF (ORB) SLAM algorithm with the optical flow method. The global trajectory map is also refined using graph-based nonlinear error optimization. In addition, to link human intentions to the robot control commands flexibly, a common spatial pattern (CSP)-based support vector machine (SVM) classification algorithm is proposed so that control commands can be obtained directly from human electroencephalograph (EEG) signals, which are preanalyzed and classified using the phenomena of event-related synchronization/desynchronization (ERS/ERD). Experiments involving several operators have verified the effectiveness of the proposed framework in real unstructured environments.

Note to Practitioners — This paper is motivated by the exploration of mobile robots remotely controlled by disabled people in an unstructured environment that includes unknown moving objects in the background. In conventional approaches, image feedback and brain–computer interface-based EEG signals evoked by visual stimuli are used to guide the motion of the robots. In this paper, the MI of the left or right hand is adopted as the input of the brain teleoperation system, and the EEG signals are classified into two categories representing two robot control commands by the designed algorithm. In addition, a deep-learning-based SLAM is designed to analyze the image and depth information of the environment provided by the robot's RGB-D sensor, and the results are converted into a 3-D map that localizes the robot and helps the operator understand the operating environment. It has been verified that the presented deep-learning-based SLAM is more efficient and robust than traditional SLAM. The feasibility of the system is demonstrated by a set of experiments in a corridor environment.
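The command-generation step described above (CSP spatial filtering of MI epochs that exhibit ERS/ERD, followed by SVM classification into two robot commands) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: it assumes MNE-Python and scikit-learn are available, and the synthetic epoch array, channel count, regularization choice, and SVM kernel are placeholder assumptions.

```python
import numpy as np
from mne.decoding import CSP
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for band-pass-filtered MI epochs:
# shape (n_epochs, n_channels, n_samples); labels 0 = left-hand MI, 1 = right-hand MI.
rng = np.random.default_rng(0)
X = rng.standard_normal((80, 16, 500))
y = np.repeat([0, 1], 40)

# CSP learns spatial filters that maximize the variance contrast between the
# two classes (the ERD/ERS effect); the SVM then maps the resulting
# log-variance features to one of the two control-command classes.
clf = Pipeline([
    ("csp", CSP(n_components=4, reg="ledoit_wolf", log=True)),
    ("svm", SVC(kernel="rbf", C=1.0)),
])

scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```

In an actual teleoperation loop, the two predicted classes would be mapped to the two robot control commands, e.g., left-hand MI to one steering command and right-hand MI to the other.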
