Abstract
We present GazeRadar, a novel radiomics and eye gaze-guided deep learning architecture for disease localization in chest radiographs. GazeRadar combines the representation of radiologists' visual search patterns with corresponding radiomic signatures into an integrated radiomics-visual attention representation for downstream disease localization and classification tasks. Radiologists generally tend to focus on fine-grained disease features, while radiomics features provide high-level textural information. Our framework first 'fuses' radiomics features with visual features inside a teacher block. The visual features are learned through a teacher-focal block, while the radiomics features are learned through a teacher-global block. A novel Radiomics-Visual Attention loss is proposed to transfer knowledge from this joint radiomics-visual attention representation of the teacher network to the student network. We show that GazeRadar outperforms baseline approaches on disease localization and classification tasks across four large-scale chest radiograph datasets comprising multiple diseases.

Code: https://github.com/bmi-imaginelab/gazeradar

Keywords: Disease localization · Eye-gaze · Fusion · Radiomics
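The teacher-student setup described above can be sketched minimally: the teacher fuses focal (visual) and global (radiomics) features into one attention representation, and a distillation-style loss pulls the student's representation toward it. This is an illustrative sketch only, not the authors' implementation; the feature shapes, the simple weighted-sum fusion, and the MSE form of the loss are all assumptions made here for clarity.

```python
import numpy as np

def fuse_teacher_attention(visual_feat, radiomics_feat, alpha=0.5):
    """Fuse teacher-focal (visual) and teacher-global (radiomics) features.

    A plain convex combination is assumed here for illustration; the
    paper's actual fusion block is more elaborate.
    """
    return alpha * visual_feat + (1.0 - alpha) * radiomics_feat

def radiomics_visual_attention_loss(teacher_fused, student_feat):
    """Distillation-style loss between the teacher's fused radiomics-visual
    attention representation and the student's features (assumed MSE form)."""
    return float(np.mean((teacher_fused - student_feat) ** 2))

# Toy features with an assumed (batch, dim) shape.
rng = np.random.default_rng(0)
visual = rng.normal(size=(8, 16))     # teacher-focal features
radiomics = rng.normal(size=(8, 16))  # teacher-global features
student = rng.normal(size=(8, 16))    # student features

fused = fuse_teacher_attention(visual, radiomics)
loss = radiomics_visual_attention_loss(fused, student)
print(loss >= 0.0)  # the loss is non-negative by construction
```

Minimizing such a loss during training would drive the student to reproduce the teacher's joint radiomics-visual attention, which is the knowledge-transfer mechanism the abstract refers to.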