Abstract

Deep learning algorithms have been applied successfully to medical image analysis and have greatly advanced the use of intelligent algorithms in medical diagnosis. However, existing deep-learning-based diagnostic methods still suffer from several drawbacks: (1) in most multi-task medical imaging methods, lesion segmentation and disease classification are performed sequentially, so the diagnosis relies excessively on the final segmentation result; (2) the traditional attention mechanism is computationally expensive for the segmentation task, and convolutional architectures cannot model long-range dependencies, which in turn limits segmentation accuracy. To address these issues, we propose a disease diagnosis and lesion segmentation model, the Dual-Branch with Transformer Axial-attention Segmentation Net (DB-TASNet). DB-TASNet combines a DenseNet-121 classification network with a U-Net segmentation network enhanced by an axial-attention transformer. DB-TASNet also includes a lesion integration module that feeds the segmentation results into the classification network, increasing its attention to lesions and improving the diagnostic results. Experimental results on the pneumothorax dataset provided by the Society for Imaging Informatics in Medicine (SIIM) show that DB-TASNet reaches an average AUC of 0.939 on the classification task and a Dice coefficient of 0.886 on the segmentation task. This performance suggests that the proposed model may provide an efficient and effective diagnostic tool for medical personnel.
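The abstract notes that full 2-D self-attention is too costly for segmentation, which motivates the axial-attention design. The sketch below is not the paper's implementation (which is not given here); it is a minimal NumPy illustration of the general axial-attention idea, with attention applied along the height axis and then the width axis, reducing the pairwise-comparison cost from O((HW)^2) to O(HW·(H+W)). Learned query/key/value projections and positional encodings are omitted for brevity.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def axial_attention(x):
    """Illustrative axial self-attention over a (H, W, C) feature map.

    Full 2-D attention compares all H*W positions pairwise; axial
    attention instead attends along each spatial axis separately,
    which is the cost reduction the abstract refers to.
    """
    H, W, C = x.shape
    scale = 1.0 / np.sqrt(C)
    # Height axis: within each column, every row attends to every row.
    out = np.empty_like(x)
    for w in range(W):
        col = x[:, w, :]                          # (H, C)
        attn = softmax(col @ col.T * scale)       # (H, H) attention weights
        out[:, w, :] = attn @ col
    # Width axis: within each row, every column attends to every column.
    out2 = np.empty_like(out)
    for h in range(H):
        row = out[h]                              # (W, C)
        attn = softmax(row @ row.T * scale)       # (W, W) attention weights
        out2[h] = attn @ row
    return out2
```

In a real segmentation backbone this operation would be wrapped in a layer with learned projections and inserted into the U-Net encoder/decoder, but the axis-wise factorization shown here is what makes long-range dependencies affordable on large feature maps.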

