Abstract

To handle rotational changes in matching-based localization of underwater terrain images, this letter proposes the ring-masked attention network (RMANet), a model-driven deep network for rotational template-matching tasks. Since traditional convolutional neural networks cannot effectively encode rotational changes, we introduce a rotation-equivariant network to extract rotation-equivariant features; this network determines the rotation of an image at the pixel level. Building on the rotation-equivariant features, we propose the ring-masked attention module (RMAM), which combines the idea of the ring projection transform with an attention mechanism to extract rotation-invariant features that are independent of orientation. The overall model combines the rotation-equivariant network with RMAM into an end-to-end network that exploits both the feature-representation capability of the learning-based model and domain knowledge. Our experimental results show that, compared with popular approaches targeting rotational matching tasks, RMANet achieves gains in both matching accuracy and running speed.
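To illustrate the core idea behind RMAM, the sketch below shows a plain ring projection transform in NumPy: feature values are averaged over concentric rings about the image centre, so rotating the input only permutes pixels within each ring and leaves the descriptor unchanged. This is an illustrative, hypothetical implementation (the function name, ring count, and binning scheme are our assumptions), not the paper's actual learned module, which fuses this idea with attention over deep rotation-equivariant features.

```python
import numpy as np

def ring_projection(feat, num_rings=8):
    """Rotation-invariant descriptor via a ring projection transform.

    'feat' is a 2-D feature map (H x W). Each pixel is binned into one of
    'num_rings' concentric rings by its distance from the image centre,
    and the descriptor is the per-ring mean. Illustrative sketch only.
    """
    h, w = feat.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
    # Bin each pixel into a ring index in [0, num_rings - 1].
    ring_idx = np.minimum((r / r.max() * num_rings).astype(int), num_rings - 1)
    return np.array([feat[ring_idx == k].mean() for k in range(num_rings)])
```

Because the ring masks are symmetric under rotation about the centre, a 90-degree rotation of the input yields exactly the same descriptor on a square grid; RMAM replaces these fixed masks with attention weights learned over the rotation-equivariant feature channels.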
