Abstract

In addition to model fairness, another serious drawback of using ML (machine learning) models to address practical problems in maritime transport stems from the black-box nature of most ML models: despite their success, driven by high prediction performance, in many real-world applications not only in the maritime industry but also in other industries including defense, medicine, finance, and law, they remain opaque in terms of explainability, which makes it difficult for users and even developers to understand, trust, and manage such powerful AI (artificial intelligence) applications. When decisions derived from black-box ML systems affect human life, safety, and the environment, the need to explain and understand how such decisions are produced by AI methods becomes even more urgent. In the relatively traditional and conservative maritime industry in particular, decision-makers and stakeholders are likely to be reticent to adopt decision support tools powered by new technologies such as AI that they can hardly interpret, control, and thus trust. This chapter first introduces the necessity of explaining black-box ML models in the maritime industry, especially in the case of PSC (port state control), and then introduces popular methods for explaining black-box models.
