Abstract

Autonomous marine systems may switch between operational modes with different levels of autonomy (LoA) due to a rapidly changing environment and the complex nature of their tasks. This dynamic autonomy adds a layer of complexity to ensuring safe marine operations, yet current risk analysis methods do not sufficiently account for it. Hence, this paper proposes a hazard identification approach based on Systems-Theoretic Process Analysis (STPA) that includes unsafe transitions between different LoA. A case study of a remotely operated vehicle (ROV) with four operational modes at different LoA is used to illustrate the approach. The results show that the proposed approach contributes to: 1) communicating the shift of responsibilities between the human operator and the system controller across operational modes, by specifying how the allocation of responsibility changes and which updated process models of the operator and the controller are needed to ensure a successful transition; 2) refining safety constraints into more concrete form to improve system design and operational procedures; and 3) identifying triggering events for marine system mode transitions so that environmental interactions are handled systematically and sufficiently.

Highlights

  • Technological developments in software and hardware have led to a rapid increase in autonomous functionality in several systems and applications

  • This paper proposes an approach that uses Systems-Theoretic Process Analysis (STPA) as a foundation and extends it for autonomous functionality, with a particular focus on the dynamic levels of autonomy (LoA) resulting from mode shifting during operation

  • We propose an approach based on STPA and a simplified state transition diagram that provides a more explicit model of operational mode transitions (a minimal sketch of such a transition model follows this list)
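
To make the notion of a simplified state transition model concrete, the following is a minimal Python sketch. The mode names and triggering events are illustrative assumptions, not the operational modes defined in the paper's ROV case study.

```python
# Hypothetical sketch of a simplified state-transition model for operational
# modes. Mode names and triggering events below are illustrative only.

# Allowed transitions: (current_mode, triggering_event) -> next_mode
TRANSITIONS = {
    ("remote_operation", "operator_approves_autonomous_plan"): "management_by_consent",
    ("management_by_consent", "operator_delegates_exception_handling"): "management_by_exception",
    ("management_by_exception", "low_sensor_confidence"): "management_by_consent",
    ("management_by_consent", "operator_requests_manual_control"): "remote_operation",
}


def next_mode(current_mode: str, triggering_event: str) -> str:
    """Return the next operational mode; stay in the current mode if the
    (mode, event) pair is not an allowed transition."""
    return TRANSITIONS.get((current_mode, triggering_event), current_mode)


if __name__ == "__main__":
    mode = "remote_operation"
    for event in ("operator_approves_autonomous_plan",
                  "operator_delegates_exception_handling"):
        mode = next_mode(mode, event)
        print(event, "->", mode)
```

In the proposed approach, each such transition and its triggering events would then be examined with STPA for unsafe transitions between LoA.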



Introduction

Technological developments in software and hardware have led to a rapid increase in autonomous functionality in several systems and applications. A desired outcome of autonomy is the development of systems that operate in a more cost-effective and safer manner. Autonomy means that the system has the “ability of integrated sensing, perceiving, analysing, communicating, planning, decision-making and acting to achieve the goals assigned by human operators through designed human-machine interface” (Utne et al., 2017a). Autonomous systems may have different levels of autonomy (LoA), and there are different classifications of LoA. In this paper, we adopt the definition from Ludvigsen and Sørensen (2016) and Utne et al. (2017b), which classifies autonomous operations into four levels: (i) automatic operation (remote operation), (ii) management by consent, (iii) semi-autonomous or management by exception, and (iv) highly autonomous.
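
As a rough illustration of this four-level classification, the sketch below encodes the levels and a simple helper describing how a mode transition shifts responsibility between the human operator and the system controller. The helper function and its wording are assumptions added for illustration, not part of the cited classification.

```python
# Sketch of the four-level LoA classification adopted in the paper
# (Ludvigsen and Sørensen, 2016; Utne et al., 2017b). The helper below is an
# illustrative assumption, not part of the cited classification.
from enum import IntEnum


class LevelOfAutonomy(IntEnum):
    AUTOMATIC_OPERATION = 1      # remote operation
    MANAGEMENT_BY_CONSENT = 2
    MANAGEMENT_BY_EXCEPTION = 3  # semi-autonomous
    HIGHLY_AUTONOMOUS = 4


def autonomy_shift(before: LevelOfAutonomy, after: LevelOfAutonomy) -> str:
    """Describe whether a mode transition raises, lowers, or keeps the LoA,
    i.e. how responsibility shifts between operator and system controller."""
    if after > before:
        return "autonomy increased: responsibility shifts toward the system controller"
    if after < before:
        return "autonomy decreased: responsibility shifts toward the human operator"
    return "autonomy unchanged"


if __name__ == "__main__":
    print(autonomy_shift(LevelOfAutonomy.MANAGEMENT_BY_CONSENT,
                         LevelOfAutonomy.MANAGEMENT_BY_EXCEPTION))
```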

