Abstract

Human-robot collaboration enables workload sharing in semi-automated production systems. Assembly operations are recognized as having high potential for productivity gains by combining the complementary skills of humans and robots. Components and parts to be assembled must be structured and presented to the robot in a known location and orientation; this process of presenting parts to the robot for assembly tasks is referred to as parts feeding. To provide flexibility in production, the feeding system must adapt to variations in part design, shape, location, and orientation. Traditional automation methods for parts feeding rely on part-specific mechanical devices, e.g. vibratory bowl feeders, which are inflexible towards part variations. This hinders exploiting the full flexibility potential of human-robot collaboration in assembly. Recent years have seen advances in machine vision that hold potential for feeding applications. This paper explores developments in machine vision for flexible feeding systems in human-robot assembly cells. A specification model is presented for developing a vision-guided flexible feeding system. Various vision-based feeding techniques are discussed and validated through an industrial case study, and the results are used to compare the efficiency of each feeding technique for industrial application.

