Abstract

Benefiting from vast amounts of data, learning-based methods have achieved remarkable performance in countless tasks in computer vision and medical image analysis. Although these deep models can approximate highly nonlinear mapping functions, they are not robust to domain shift in the input data. This is a significant concern that impedes the large-scale deployment of deep models in medical imaging, where data distributions vary inherently due to the lack of imaging standardization. Researchers have therefore explored many domain generalization (DG) methods to alleviate this problem. In this work, we introduce a Hessian-based vector field that effectively models the tubular shape of vessels, an invariant feature across data distributions. The vector field serves as an embedding that exploits the self-attention mechanism of a vision transformer. We design parallel transformer blocks that emphasize local features at different scales. Furthermore, we present a novel data augmentation method that perturbs image style while leaving the vessel structure unchanged. In experiments on public datasets of different modalities, we show that our model achieves superior generalizability compared with existing algorithms. Our code and trained model are publicly available at https://github.com/MedICL-VU/Vector-Field-Transformer.
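To make the Hessian-based vector field idea concrete, below is a minimal illustrative sketch (not the authors' released implementation) of how such a field can be computed for a 2D vessel image. It assumes NumPy and SciPy are available; the Hessian is built from Gaussian second derivatives at a scale `sigma`, and the eigenvector associated with the smaller-magnitude eigenvalue gives a per-pixel direction that follows the vessel, a structural cue that changes little when intensity or style shifts. The function name and parameters are hypothetical, chosen only for illustration.

```python
# Minimal sketch of a Hessian-based vessel direction field (2D), assuming
# NumPy and SciPy; this is an illustration, not the paper's exact formulation.
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_vector_field(image, sigma=2.0):
    """Return per-pixel vessel directions (H, W, 2) and sorted eigenvalues (H, W, 2)."""
    image = image.astype(np.float64)
    # Gaussian second derivatives approximate the Hessian entries at scale sigma.
    hxx = gaussian_filter(image, sigma, order=(0, 2))  # d^2 I / dx^2
    hyy = gaussian_filter(image, sigma, order=(2, 0))  # d^2 I / dy^2
    hxy = gaussian_filter(image, sigma, order=(1, 1))  # d^2 I / dx dy
    # Assemble a symmetric 2x2 Hessian at every pixel: shape (H, W, 2, 2).
    hess = np.stack([np.stack([hyy, hxy], axis=-1),
                     np.stack([hxy, hxx], axis=-1)], axis=-2)
    # Batched eigen-decomposition over all pixels.
    eigvals, eigvecs = np.linalg.eigh(hess)
    # Sort by absolute eigenvalue so index 0 corresponds to the along-vessel direction.
    order = np.argsort(np.abs(eigvals), axis=-1)
    eigvals = np.take_along_axis(eigvals, order, axis=-1)
    eigvecs = np.take_along_axis(eigvecs, order[..., None, :], axis=-1)
    vessel_direction = eigvecs[..., :, 0]  # eigenvector of the smallest |eigenvalue|
    return vessel_direction, eigvals
```

In a setup like the one the abstract describes, such a field could replace or accompany the raw intensity image as the input embedding to the transformer; how the actual model fuses it with image features is detailed in the paper and the linked repository.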
