Abstract

Over the last decades, collaborative robots capable of operating outside their cages have become widely used in industry to assist humans with mundane and harsh manufacturing tasks. Although such robots are inherently safe by design, they are commonly accompanied by external sensors and other cyber-physical systems that facilitate close cooperation with humans, which frequently render the collaborative ecosystem unsafe and prone to hazards. We introduce a method that capitalizes on partially observable Markov decision processes (POMDPs) to amalgamate the nominal actions of the system with the unsafe control actions identified by System Theoretic Process Analysis (STPA). A decision-making mechanism that constantly prompts the system toward a safer state is realized by providing situation awareness about the safety levels of the collaborative ecosystem, associating the system's safety awareness with specific groups of selected actions. The POMDP compensates for the partial observability and uncertainty of the current state of the collaborative environment and creates safety-screening policies that tend to make decisions steering the system from unsafe to safe states in real time during the operational phase. The theoretical framework is assessed on a simulated human–robot collaborative scenario and proved capable of identifying loss and success scenarios.
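The screening mechanism described above can be illustrated with a minimal discrete POMDP: a belief over safety levels is maintained with a Bayes filter, and a greedy one-step policy prompts the action whose expected next state is safest. This is only a sketch of the general idea; the state, action, and observation names and all probability values below are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

# Illustrative discretization of the collaborative cell (assumed, not from the paper)
states = ["safe", "hazardous", "unsafe"]       # safety levels of the ecosystem
actions = ["continue", "slow_down", "stop"]    # nominal + mitigating control actions
observations = ["clear", "human_near"]         # noisy sensor readings

# T[a][s, s']: transition probabilities under each action (assumed values)
T = {
    "continue":  np.array([[0.8, 0.15, 0.05],
                           [0.1, 0.6,  0.3 ],
                           [0.0, 0.2,  0.8 ]]),
    "slow_down": np.array([[0.9, 0.1,  0.0 ],
                           [0.5, 0.45, 0.05],
                           [0.2, 0.5,  0.3 ]]),
    "stop":      np.array([[1.0, 0.0,  0.0 ],
                           [0.8, 0.2,  0.0 ],
                           [0.5, 0.4,  0.1 ]]),
}

# O[s, o]: probability of each observation in each state (assumed values)
O = np.array([[0.9, 0.1],
              [0.4, 0.6],
              [0.1, 0.9]])

safety_value = np.array([1.0, 0.4, 0.0])  # higher value = safer state

def belief_update(b, a, o_idx):
    """Bayes filter: predict the belief through T[a], then correct with O[:, o_idx]."""
    predicted = b @ T[a]
    corrected = predicted * O[:, o_idx]
    return corrected / corrected.sum()

def screen_action(b):
    """Greedy one-step policy: select the action with the safest expected
    next state, mirroring the 'prompt toward a safer state' idea."""
    return max(actions, key=lambda a: (b @ T[a]) @ safety_value)

b = np.array([1.0, 0.0, 0.0])           # initially believe the cell is safe
b = belief_update(b, "continue", 1)     # a "human_near" observation arrives
print(screen_action(b))                 # a mitigating action is recommended
```

A full POMDP solver would optimize a discounted long-horizon value rather than this one-step lookahead; the sketch only shows how belief-state reasoning can bias action selection toward higher safety levels under partial observability.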

Highlights

  • The immense need of contemporary industries to meet the technological requirements of the factories of the future, as imposed by Industry 4.0, has brought significant technological breakthroughs in the robotics domain, leading robots out of their cages to work in close collaboration with humans and aiming to increase productivity, flexibility, and autonomy in production [1]

  • The safety community brought into action specific International Organization for Standardization (ISO) standards [7,8], custom-tailored to identify a series of important hazards in human–robot collaboration (HRC) applications, including potential sources of harm for the operator, their likely origins, and safety regulations for guiding the design and deployment of robotic solutions [9]

  • Our work aims to provide such a real-time tool, which capitalizes on the unsafe control actions extracted from System Theoretic Process Analysis (STPA) in order to prompt the system to select an action, based on current and past observations and actions, that will transition it to a state belonging to a higher safety level


Introduction

The immense need of contemporary industries to meet the technological requirements of the factories of the future, as imposed by Industry 4.0, has brought significant technological breakthroughs in the robotics domain, leading robots out of their cages to work in close collaboration with humans and aiming to increase productivity, flexibility, and autonomy in production [1]. Research endeavors in safety analysis have realized powerful tools that enable mainly hardware-related fault forecasting, which aims at estimating the cause–consequence chain of fault occurrence [12]. Such methods can be either bottom-up, where a fault's effect on the system is estimated in terms of cause–consequence, severity, and probability, e.g., Failure Mode, Effects & Criticality Analysis (FMECA) [13], or top-down, where faults are determined from identified unwanted effects, e.g., Fault Tree Analysis (FTA) [14] and Hazard and Operability Analysis (HAZOP) [15].

Related Work
STPA Principles
STPA as Component of a Decision Making Tool
Markov Decision Processes
State Uncertainty in Markov Decision Processes
STPA-Related POMDP Formulation
Prompting on Different Safety Levels
Application of STPA with POMDP in Collaborative Tasks
Process Analysis
Identification of System-Level Hazards
Feasibility Study
Conclusions