Abstract
The recent proliferation of artificial intelligence (AI) systems has raised several questions, including “Can AI assessments or predictions be trusted?” and “How can people be encouraged to accept AI-based recommendations?” These questions have driven the rise of explainable AI. This short talk presents a scenario in which AI could provide information about the seabed to a sonar operator. In this scenario, received ship noise is input to a deep learning model trained to predict a seabed class. The predicted seabed class is then used to calculate transmission loss (TL) as a function of range. These TL curves from the predicted seabed class are compared to those obtained using the seabed information stored in a database. This comparison could allow the sonar operator to evaluate the applicability of the seabed database for their present location. Adoption of this type of AI tool depends on the sonar operator's attitude toward the AI and its predictions. Ideas are given for how this attitude could be improved through the adoption of explainable AI techniques. [Work supported by the Office of Naval Research grant #N00014-22-12402.]
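The workflow described above (predict a seabed class from received ship noise, compute TL curves for the predicted class, and compare them against TL curves derived from the database entry) can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the function names, the seabed classes, the stand-in classifier, and the toy spreading-plus-attenuation TL formula are placeholders, not the authors' model or propagation code.

```python
"""Minimal sketch of the comparison workflow, under assumed placeholder functions."""
import numpy as np

# Hypothetical seabed classes the deep learning model might predict (assumption).
SEABED_CLASSES = ["sand", "silt", "mud", "rock"]


def predict_seabed_class(received_noise: np.ndarray) -> str:
    """Stand-in for the trained deep learning classifier.

    A real system would pass received ship noise (e.g., a spectrogram)
    through a trained network; here a fixed class is returned so the
    sketch runs end to end.
    """
    return "sand"


def transmission_loss(ranges_m: np.ndarray, seabed: str) -> np.ndarray:
    """Toy TL curve: spherical spreading plus a seabed-dependent attenuation term.

    A real workflow would instead run an acoustic propagation model
    parameterized by the seabed class. The attenuation values (dB/km)
    below are arbitrary illustrative numbers.
    """
    alpha_db_per_km = {"sand": 0.8, "silt": 0.5, "mud": 0.3, "rock": 0.1}[seabed]
    return 20.0 * np.log10(ranges_m) + alpha_db_per_km * ranges_m / 1000.0


ranges_m = np.linspace(100.0, 20_000.0, 200)

# TL from the AI-predicted seabed class.
predicted_class = predict_seabed_class(received_noise=np.zeros(1024))
tl_predicted = transmission_loss(ranges_m, predicted_class)

# TL from the seabed class stored in the database for this location (assumed entry).
database_class = "mud"
tl_database = transmission_loss(ranges_m, database_class)

# A simple discrepancy summary the operator could inspect: large divergence
# suggests the database entry may not apply at the present location.
max_diff_db = np.max(np.abs(tl_predicted - tl_database))
print(f"Predicted class: {predicted_class}, database class: {database_class}")
print(f"Maximum TL difference over range: {max_diff_db:.1f} dB")
```

In practice, an explainable-AI layer could accompany this comparison, for example by presenting the classifier's confidence alongside the TL discrepancy, so the operator has grounds for accepting or rejecting the AI-based recommendation.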