Traditional wheelchairs focus on the basic "human-chair" motor interaction needed to secure basic mobility for elderly people and people with disabilities. For people with visual, hearing, or physical impairments, however, current wheelchairs fall short in accessibility and independent travel. This paper therefore develops an intelligent wheelchair that combines multimodal human–computer interaction with autonomous navigation. First, it studies multimodal interaction based on occupant gesture recognition, speech recognition, and head-posture recognition, and proposes a wheelchair control method that maps three-dimensional head posture onto the two-dimensional motion plane. In tests, the average accuracy of the gesture, head-posture, and voice control modes of the proposed motorized wheelchair exceeds 95 percent. Second, LiDAR-based indoor autonomous navigation for the smart wheelchair is investigated: an environment map is constructed, the A* and DWA algorithms are used for global and local path planning, respectively, and adaptive Monte Carlo localization provides real-time positioning. Experiments show that during autonomous navigation the position error of the wheelchair stays within 10 cm and the heading-angle error remains below 5°. The multimodal human–computer interaction and assisted-driving technology proposed in this study can partially compensate for the functional limitations of people with disabilities and improve the quality of life of elderly and disabled users.
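As a rough illustration of the head-posture control idea, the sketch below maps two of the three head angles onto a planar velocity command for the wheelchair base. The abstract does not give the mapping details, so the pitch-to-linear and yaw-to-angular assignment, the dead zone, and all speed and angle limits are assumptions for illustration only, not the authors' implementation.

```python
# Minimal sketch (assumed interface): mapping 3D head posture to a 2D motion
# command. Pitch (nod forward/back) is assumed to set linear speed and yaw
# (turn left/right) the angular speed; roll is ignored. A dead zone keeps the
# chair still during small, unintentional head movements.

import math
from dataclasses import dataclass


@dataclass
class HeadPose:
    roll: float   # rad, rotation about the forward axis (unused here)
    pitch: float  # rad, positive = head tilted forward
    yaw: float    # rad, positive = head turned left


MAX_LINEAR = 0.5               # m/s, assumed wheelchair speed limit
MAX_ANGULAR = 0.8              # rad/s, assumed turning-rate limit
DEAD_ZONE = math.radians(8)    # ignore head motion smaller than this
FULL_SCALE = math.radians(25)  # head angle that maps to full speed


def _scale(angle: float) -> float:
    """Map an angle to [-1, 1] with a dead zone and saturation."""
    if abs(angle) < DEAD_ZONE:
        return 0.0
    sign = 1.0 if angle > 0 else -1.0
    norm = (abs(angle) - DEAD_ZONE) / (FULL_SCALE - DEAD_ZONE)
    return sign * min(norm, 1.0)


def head_pose_to_cmd(pose: HeadPose) -> tuple[float, float]:
    """Return (linear m/s, angular rad/s) for the wheelchair base."""
    linear = MAX_LINEAR * _scale(pose.pitch)   # nod forward -> drive forward
    angular = MAX_ANGULAR * _scale(pose.yaw)   # turn head left -> turn left
    return linear, angular


if __name__ == "__main__":
    # Example: head tilted 15 deg forward and turned 10 deg to the left.
    v, w = head_pose_to_cmd(HeadPose(0.0, math.radians(15), math.radians(10)))
    print(f"linear={v:.2f} m/s, angular={w:.2f} rad/s")
```

In practice the output would be published as a velocity command to the wheelchair's motor controller (e.g. a ROS `cmd_vel` topic in a typical LiDAR/AMCL navigation stack), but the paper's actual control interface is not specified in the abstract.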