Abstract

This paper reports on the system design for integrating the various processes needed for the end-to-end implementation of a smart assistive robotic manipulator. Specifically, progress is reported on empowering the UCF-MANUS system with a suite of sensory, computational, and multimodal interface capabilities so that its autonomy can be made accessible to users with a wide range of disabilities. Laboratory experiments demonstrate the ability of the system prototype to successfully and efficiently complete object-retrieval tasks. The impact of the various interface modalities on user performance is benchmarked via empirical studies with healthy subjects operating the robot in a simulated instrumental-activities-of-daily-living setup. An analysis of the collected quantitative data shows that the prototype is interface neutral and is robust to variations in the tasks and the environment. The prototype autonomous system is also quantitatively superior to Cartesian control for all tested tasks under a “number of commands” metric; under a “time to task completion” metric, however, the system is superior for “hard” tasks but not for “easy” tasks.
