Abstract

Collaborative robots (cobots) are becoming more prominent in the manufacturing industry due to their ability to operate outside safety zones and work in tandem with humans to perform precise and repetitive tasks, such as visual inspection, product categorization, and quality control. Cobots generally have constrained computational resources, which limit their ability to perform complex machine learning (ML) tasks; as such, various cloud- and fog-based computation task offloading mechanisms have been proposed. However, a growing reluctance to share manufacturing data on the cloud, cybersecurity concerns, and the demand for agile decision-making are encouraging researchers to design resource-sharing frameworks for on-floor cobots, in which they can share the execution of complex ML tasks. At the same time, agile on-floor environments and the potential presence of malicious elements make reliable task offloading a significant challenge. This article investigates reliability issues and their effects on the execution of complex ML tasks by participating on-floor cobots. Specifically, it aims to answer the question "whom to offload?" with the objective of ensuring reliability, security, and data protection when offloading computation tasks. To this end, a reputation-based collaborative robotic learning (CoRoL) framework is proposed, with the ability to isolate and/or minimize the impact of malicious or poor-performing cobots on computation task execution. In addition, CoRoL is supported by split learning for privacy-preserving task offloading with minimal data exchange. Simulation results and comparative analysis demonstrate CoRoL's efficiency in terms of the percentage of completed tasks, achieved accuracy, and impact on energy consumption.
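The abstract presents reputation tracking as the mechanism for deciding "whom to offload?" and for isolating malicious or poor-performing cobots. The sketch below illustrates that general idea only: the abstract does not specify CoRoL's scoring rule, so the exponential-smoothing update, the trust threshold, and the cobot/fleet names are illustrative assumptions rather than the paper's implementation.

```python
import random
from dataclasses import dataclass

# Illustrative sketch, not CoRoL itself: the threshold and smoothing factor
# below are assumed values chosen for demonstration.
TRUST_THRESHOLD = 0.3   # assumed cutoff below which a cobot is isolated
LEARNING_RATE = 0.2     # assumed smoothing factor for reputation updates


@dataclass
class Cobot:
    name: str
    reputation: float = 0.5  # start neutral; updated from observed outcomes

    def is_trusted(self) -> bool:
        return self.reputation >= TRUST_THRESHOLD


def update_reputation(cobot: Cobot, task_succeeded: bool) -> None:
    """Smooth the reputation toward the latest verified task outcome."""
    outcome = 1.0 if task_succeeded else 0.0
    cobot.reputation = (1 - LEARNING_RATE) * cobot.reputation + LEARNING_RATE * outcome


def select_worker(cobots: list[Cobot]) -> Cobot:
    """Answer "whom to offload?": pick the trusted cobot with the highest reputation."""
    trusted = [c for c in cobots if c.is_trusted()]
    if not trusted:
        raise RuntimeError("No trusted cobot available for offloading")
    return max(trusted, key=lambda c: c.reputation)


if __name__ == "__main__":
    fleet = [Cobot("cobot-A"), Cobot("cobot-B"), Cobot("cobot-C")]
    # Simulated behaviour: cobot-C acts maliciously/poorly and fails most tasks.
    failure_rate = {"cobot-A": 0.05, "cobot-B": 0.10, "cobot-C": 0.90}

    for _ in range(30):
        # Each round, offloaded tasks are spread over the currently trusted cobots
        # and reputations are updated from the observed results.
        for worker in [c for c in fleet if c.is_trusted()]:
            succeeded = random.random() > failure_rate[worker.name]
            update_reputation(worker, succeeded)

    for c in fleet:
        status = "trusted" if c.is_trusted() else "isolated"
        print(f"{c.name}: reputation={c.reputation:.2f} ({status})")
    print("Next offload target:", select_worker(fleet).name)
```

Under these assumed parameters, the consistently failing cobot drops below the threshold after a few rounds and is excluded from further task assignment, which is the isolation behaviour the abstract attributes to CoRoL.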
