Abstract

In this article, we present a control architecture for a robotic manipulator ultimately aimed at helping people with severe motion disabilities perform daily life operations, such as manipulating objects or drinking. The proposed solution allows the user to focus attention only on the operational tasks, while all safety-related issues are automatically handled by the developed control architecture. The user commands the manipulator by sending high-level commands via a P300-based brain–computer interface. A perception module, relying on an RGB-D sensor, continuously detects and localizes the objects in the scene, tracks the position of the user, and monitors the environment to identify static and dynamic obstacles, e.g., a person entering the scene. A lightweight manipulator is controlled via a task-priority inverse kinematics algorithm that handles task hierarchies composed of equality-based and set-based tasks, including obstacle avoidance and joint mechanical limits. This article describes the overall architecture and the integration of the implemented software modules, which are based on common frameworks and software libraries such as the Robot Operating System (ROS), BCI2000, OpenCV, and PCL. Experimental results on a use case scenario, in which a Kinova 7-DOF Jaco2 robot helps a user perform drinking and manipulation tasks, show the effectiveness of the developed control architecture.
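The task-priority inverse kinematics mentioned above can be illustrated with a minimal sketch of the classical null-space projection scheme for equality-based tasks. This is a generic illustration, not the authors' implementation: the function names are hypothetical, set-based tasks (obstacle avoidance, joint limits) are omitted for brevity, and damped or SVD-based pseudoinverses would be needed near singularities in practice.

```python
import numpy as np

def task_priority_ik_step(jacobians, task_velocities, n_joints):
    """One step of a classic task-priority IK scheme: each lower-priority
    task is resolved in the null space of all higher-priority tasks, so it
    cannot perturb them. `jacobians` and `task_velocities` are ordered from
    highest to lowest priority."""
    dq = np.zeros(n_joints)          # accumulated joint velocity command
    P = np.eye(n_joints)             # null-space projector of tasks above
    for J, xdot in zip(jacobians, task_velocities):
        Jp = J @ P                   # task Jacobian restricted to the
        Jp_pinv = np.linalg.pinv(Jp) # remaining null space
        # add the contribution of this task without disturbing higher ones
        dq = dq + Jp_pinv @ (xdot - J @ dq)
        # shrink the null space for the next (lower-priority) task
        P = P @ (np.eye(n_joints) - Jp_pinv @ Jp)
    return dq
```

When the tasks are compatible, the highest-priority task is satisfied exactly and lower-priority tasks are satisfied as far as the remaining null space allows; this is the standard behavior a hierarchy such as "obstacle avoidance above end-effector tracking" relies on.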
