Abstract
The current market trend is oriented toward increasing mass customization, meaning that modern production systems must be both flexible and highly productive. This stems from the fact that we are still living in the era of Industry 4.0, whose cornerstone is high-productivity systems, while a migration toward Industry 5.0 is under way, which includes the human-centered design of the workplace among its principles. Operators must therefore be placed at the center of design techniques in order to maximize their well-being. Among the wide set of new technologies, collaborative robots (cobots) are one that modern production systems are trying to integrate, because they work directly alongside human operators, combining the flexibility of manual systems with the productivity of automated ones. This paper focuses on the impact these technologies have at different levels within a production plant and on the improvement of the collaborative experience. At the workstation level, control methodologies are investigated and developed: technologies such as computer vision and augmented reality can be applied to aid and guide the cobot's activities, with two goals. The first is an increase in overall productivity, generated by reducing idle times and safety stops and minimizing the effort required of the operator during the work. This can be achieved through a multiobjective task allocation that simultaneously minimizes the makespan, for productivity requirements, and the operator's energy expenditure and mental workload, for wellness requirements. The second is a safe, human-centered workspace in which collisions can be avoided in real time. This can be achieved by using real-time multicamera systems and skeleton tracking so that the operator's position in the work cell is known at all times.
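To make the multiobjective task allocation concrete, the sketch below shows one common formulation: a weighted-sum objective over makespan, operator energy expenditure, and mental workload, minimized by enumerating human/cobot assignments. All task names, cost values, and weights are illustrative assumptions, not data from the paper; real systems would use a proper solver rather than exhaustive search.

```python
from itertools import product

# Hypothetical per-task costs (illustrative numbers only):
# for the human, a (time, energy, mental load) triple; for the cobot, a time.
tasks = {
    "pick":     {"human": (4.0, 2.0, 1.0), "cobot": 6.0},
    "assemble": {"human": (8.0, 5.0, 3.0), "cobot": 10.0},
    "inspect":  {"human": (3.0, 1.0, 2.0), "cobot": 5.0},
}

def evaluate(assignment):
    """Score one human/cobot assignment as a weighted sum of makespan,
    operator energy expenditure, and operator mental workload."""
    human_time = cobot_time = energy = mental = 0.0
    for task, who in assignment.items():
        if who == "human":
            t, e, m = tasks[task]["human"]
            human_time += t
            energy += e
            mental += m
        else:
            cobot_time += tasks[task]["cobot"]
    makespan = max(human_time, cobot_time)  # resources work in parallel
    # Weights are arbitrary here; in practice they encode the priority
    # given to productivity versus operator wellness.
    return 1.0 * makespan + 0.5 * energy + 0.5 * mental

def best_assignment():
    """Exhaustively search all human/cobot assignments (fine for a few
    tasks) and return the one with the lowest weighted cost."""
    names = list(tasks)
    best, best_cost = None, float("inf")
    for combo in product(("human", "cobot"), repeat=len(names)):
        assignment = dict(zip(names, combo))
        cost = evaluate(assignment)
        if cost < best_cost:
            best, best_cost = assignment, cost
    return best, best_cost
```

With these sample numbers, the optimum offloads the long assembly task to the cobot while the operator handles the short pick and inspect tasks, balancing the two parallel timelines.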
The system will offer the possibility of providing feedback based on the discrepancies between the physical world and the virtual models, dynamically reallocating tasks to the resources whenever the requirements are no longer satisfied. This allows the technology to be applied to sectors that require constant process control, and it also improves the human–robot interaction: the human operator and the cobot are no longer merely two separate resources working in the same cell, but can achieve a real human–robot collaboration. This paper presents a framework that allows the aforementioned goals to be reached.
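The discrepancy-driven reallocation described above can be sketched as a simple monitoring loop: compare the virtual model's predicted cycle time with the measured one and flag any cycle whose relative deviation exceeds a tolerance, so the task-allocation step can be re-run. The threshold value and the cycle-time metric are assumptions for illustration, not specifics from the paper.

```python
THRESHOLD = 0.15  # assumed relative deviation still satisfying requirements

def needs_reallocation(predicted, measured, threshold=THRESHOLD):
    """Return True when the physical process has drifted from the
    virtual model by more than the allowed relative deviation."""
    if predicted <= 0:
        raise ValueError("predicted cycle time must be positive")
    return abs(measured - predicted) / predicted > threshold

def monitor(cycle_pairs):
    """Yield the indices of (predicted, measured) cycle-time pairs whose
    discrepancy requires the task allocation to be recomputed."""
    for i, (predicted, measured) in enumerate(cycle_pairs):
        if needs_reallocation(predicted, measured):
            yield i
```

In a running cell, the flagged indices would trigger the multiobjective task allocation again with updated parameters, closing the loop between the virtual models and the physical workspace.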