Abstract

We introduce a vision-based human–computer interaction system that collaborates with a user by providing feedback during the user's activities. We design an intelligent workspace system that analyzes the context of a user's tasks and generates appropriate feedback to guide the user in completing the tasks correctly. While the user performs a high-level activity, our system analyzes which sub-events the user has already completed and which sub-events are needed next for the user to finish the activity. Based on this analysis, the system generates hierarchical feedback messages to assist the user. We test our system on three types of tasks: tightening nuts on a car wheel, assembling a toy spaceship from LEGO blocks, and assembling a laptop computer from its parts. Experimental results demonstrate the accuracy of the vision-based activity analysis, and a comparative user study shows that our system's feedback enables users to complete assembly tasks with significantly improved efficiency.
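To make the sub-event analysis concrete, the sketch below shows one way feedback could be derived from the set of completed sub-events, assuming each activity is modeled as sub-events with simple prerequisite relations. The names (`SubEvent`, `next_feedback`, the wheel example) are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass, field

# Hypothetical sub-event model (assumption): a high-level activity is a set of
# sub-events, some of which require other sub-events to be completed first.
@dataclass
class SubEvent:
    name: str
    prerequisites: set = field(default_factory=set)

def next_feedback(activity, completed):
    """Return a feedback message naming the sub-events the user can do next."""
    remaining = [e for e in activity if e.name not in completed]
    if not remaining:
        return "Task complete."
    # A sub-event is ready when all of its prerequisites have been completed.
    ready = [e.name for e in remaining if e.prerequisites <= completed]
    if ready:
        return "Next: " + ", ".join(ready)
    return "Finish the current sub-event before continuing."

# Toy example loosely based on the wheel task described in the abstract.
wheel_task = [
    SubEvent("place_wheel"),
    SubEvent("insert_nuts", prerequisites={"place_wheel"}),
    SubEvent("tighten_nuts", prerequisites={"insert_nuts"}),
]
print(next_feedback(wheel_task, completed={"place_wheel"}))  # Next: insert_nuts
```

In the paper's setting, the `completed` set would come from the vision-based activity analysis rather than being supplied by hand.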

