Abstract

The automation of the car charging process is motivated by the rapid development of self-driving technologies and the growing importance of eco-friendly transportation. Automating this process requires Computer Vision (CV) techniques; however, precisely positioning the charger plug autonomously remains challenging because CV algorithms are sensitive to lighting and weather conditions. We therefore introduce a novel robotic teleoperation system based on hand gesture recognition through teleconferencing software. Users connected by teleconference use hand gestures to teleoperate the electric plug mounted on the end-effector of a collaborative robot. We conducted a user study to evaluate the system's performance and suitability, comparing OmniCharger with two baseline interfaces (a UR10 Teach Pendant and a Logitech F710 Wireless Gamepad). Except for two trials, all users were able to position the plug inside a 5 cm target with each interface. Neither the distance to the target nor the orientation error differed significantly across the three interfaces (\(p = 0.1099 > 0.05\) and \(p = 0.0903 > 0.05\), respectively). The NASA TLX questionnaire results showed low workload in all sub-scales, the SUS results rated the usability of the proposed interface above average (68%), and the UEQ showed excellent performance of the OmniCharger interface in the attractiveness, stimulation, and novelty attributes.
