Abstract

Problem statement: Wheelchairs are widely used both as a rehabilitation tool and as an assistive device in the daily lives of people with disabilities. However, some disabilities make it difficult to use traditional wheelchairs operated manually or with a joystick. This paper describes an intelligent control system for wheelchair automation that allows the user to give commands by different means: voice, eye movements, and muscle tension. The object recognition system and the logical processing of commands support a wide variety of interfaces and commands. To achieve these goals, a semiotic model of the world is used.

Purpose of research: Development of a control system for a robotic wheelchair that supports multimodal interfaces and has a level of automation high enough to enable efficient operation by users with various disabilities.

Results: The paper describes the developed architecture of the control system based on the semiotic world model, along with modules for the speech interface, gaze control, and a myosensor-based interface. The navigation system and the processing module for the semiotic world model ensure the safe execution of user commands, including movement, object recognition, and the processing of commands that contain references to known objects. The system supports interfacing with a manipulator, which is controlled using a linguistic model: an action-description language that represents the admissible movements of the manipulator as a formal grammar. The proposed system was tested in the Gazebo environment, on a model of the robot and a detailed model of a room, as well as on the corresponding software and hardware implementation of the robotic wheelchair.
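To make the linguistic model concrete, the sketch below shows how admissible manipulator actions can be checked against a small context-free grammar. This is a minimal illustration only; the production rules and vocabulary are hypothetical examples, not the grammar defined in the paper.

```python
# Hypothetical action-description grammar: nonterminals map to lists of
# productions; any symbol not in the table is treated as a terminal token.
GRAMMAR = {
    "ACTION": [["VERB", "OBJECT"], ["VERB", "OBJECT", "DIRECTION"]],
    "VERB": [["take"], ["move"], ["release"]],
    "OBJECT": [["cup"], ["book"]],
    "DIRECTION": [["left"], ["right"], ["forward"]],
}

def derives(symbol, tokens):
    """Return the leftover token lists for every way `symbol`
    can derive a prefix of `tokens`."""
    if symbol not in GRAMMAR:  # terminal: must match the next token
        return [tokens[1:]] if tokens and tokens[0] == symbol else []
    rests = []
    for production in GRAMMAR[symbol]:
        partial = [tokens]
        for sym in production:  # expand the production left to right
            partial = [rest for t in partial for rest in derives(sym, t)]
        rests.extend(partial)
    return rests

def is_admissible(command):
    """A command is admissible iff the grammar derives it exactly,
    with no tokens left over."""
    return any(rest == [] for rest in derives("ACTION", command.split()))
```

Representing admissible movements as a grammar lets the controller reject malformed or unsafe commands (e.g. `"cup take"`) before they reach the manipulator, while accepting well-formed ones such as `"move book left"`.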
Practical significance: Automation and the use of artificial intelligence models and methods in wheelchair development make wheelchairs more versatile, which increases the number of users for whom a particular model is suitable, reduces the strain on the operator, and expands the wheelchair's capabilities. The proposed use of sign models achieves these goals by combining logical processing of commands, as a means of handling multimodal interfaces, with the execution of complex commands.
