Deep learning methods can recognize underwater acoustic targets in complex marine environments. In this study, we apply deep learning to decision-level recognition of underwater acoustic targets. Decision-level methods rely on evidence theory to combine the class probabilities produced by different sources, and this combination can yield conflicting evidence. We therefore propose an underwater acoustic target recognition method based on weighted average Dempster-Shafer (WA-DS) decision fusion, which replaces anomalous evidence with weighted average evidence and thereby resolves the problem of synthesizing anomalous evidence. First, the underwater acoustic dataset is preprocessed by downsampling, denoising, and audio segmentation. Then, Mel-frequency cepstral coefficients (MFCC), Gammatone frequency cepstral coefficients (GFCC), and time–frequency (TF) features are extracted, and a ResNet18 network is trained on each feature to output class probabilities. Finally, the class probabilities of the individual features are converted into evidence through the basic probability assignment (BPA) function, and the resulting bodies of evidence are fused with our decision fusion algorithm. Experimental results show that our decision fusion method synthesizes anomalous evidence more effectively than traditional decision fusion methods. Our approach achieves a recognition accuracy of up to 98.34% on the ShipsEar dataset, a significant improvement over any single feature.
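
To make the fusion step concrete, the following is a minimal sketch of weighted-average Dempster-Shafer style combination over singleton class hypotheses. The credibility weighting used here (derived from pairwise Euclidean distance between BPAs) and the function names are illustrative assumptions; the paper's exact WA-DS weighting may differ.

```python
# Sketch of WA-DS style fusion: replace each body of evidence with the
# credibility-weighted average BPA, then combine it (n - 1) times with
# Dempster's rule. Focal elements are assumed to be single classes.
import numpy as np

def dempster_combine(m1, m2):
    """Dempster's rule for BPAs over singleton classes (vectors summing to 1)."""
    joint = np.outer(m1, m2)        # pairwise products of masses
    agreement = np.trace(joint)     # mass on matching classes
    conflict = 1.0 - agreement      # K: mass assigned to conflicting pairs
    if conflict >= 1.0 - 1e-12:
        raise ValueError("Total conflict: Dempster's rule is undefined.")
    return np.diag(joint) / (1.0 - conflict)

def wa_ds_fusion(bpas):
    """Weighted-average D-S fusion of n bodies of evidence (rows of `bpas`)."""
    bpas = np.asarray(bpas, dtype=float)
    n = len(bpas)
    # Pairwise distance -> similarity -> support -> normalized credibility weights
    # (an assumed weighting scheme, used here only for illustration).
    dist = np.linalg.norm(bpas[:, None, :] - bpas[None, :, :], axis=-1)
    dmax = dist.max()
    sim = 1.0 - dist / dmax if dmax > 0 else np.ones_like(dist)
    support = sim.sum(axis=1) - 1.0               # drop self-similarity
    weights = support / support.sum() if support.sum() > 0 else np.full(n, 1.0 / n)
    avg = weights @ bpas                          # weighted average evidence
    fused = avg.copy()
    for _ in range(n - 1):                        # combine (n - 1) times
        fused = dempster_combine(fused, avg)
    return fused

# Example: three feature-wise BPAs (e.g., MFCC, GFCC, TF) over four ship
# classes, where the second source conflicts with the other two.
bpas = [[0.70, 0.15, 0.10, 0.05],
        [0.05, 0.80, 0.10, 0.05],   # anomalous / conflicting evidence
        [0.65, 0.20, 0.10, 0.05]]
print(wa_ds_fusion(bpas))           # mass concentrates on class 0 despite the conflict
```

In this sketch the conflicting source receives a low credibility weight, so the weighted average evidence, and hence the fused result, is dominated by the two agreeing sources rather than being distorted by the anomalous one.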