Abstract
Acquisition of a standard section is a prerequisite for ultrasound diagnosis. For a long time, clear definitions of standard liver views have been lacking, since their identification has relied largely on physician experience; accurate automated recognition of standard liver sections therefore remains one of the most important open problems in medical ultrasonography. In this article, we enrich and expand the classification criteria for standard liver ultrasound sections on the basis of clinical practice and propose an Ultra-Attention structured perception strategy to automate the recognition of these sections. Inspired by the attention mechanism in natural language processing, Ultra-Attention treats local regions of the ultrasound image as modular tokens that participate in a global attention computation, significantly amplifying small features that would otherwise go unnoticed. In addition to a dropout mechanism, we adopt a Part-Transfer Learning training approach that accelerates convergence and increases the model's robustness. The proposed Ultra-Attention model outperforms a range of traditional convolutional neural network-based techniques, achieving, to our knowledge, the best reported performance on this task with a classification accuracy of 93.2%. We also illustrate and compare convolutional feature extraction with the Ultra-Attention approach, offering a reference point for future research on local modular feature capture in ultrasound images. By establishing a standard-scan guideline for liver ultrasound-based disease diagnosis, this work advances research on automated diagnosis guided by standard liver ultrasound sections.
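The abstract does not specify the Ultra-Attention architecture, so the following is only a minimal, hypothetical PyTorch sketch of the core idea it describes: local ultrasound patches serving as tokens in a global self-attention computation, combined with dropout, plus one plausible reading of "Part-Transfer Learning" in which the transferred portion of the network is frozen during fine-tuning. The patch size, embedding width, class count, and choice of frozen layers are all illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class PatchAttentionClassifier(nn.Module):
    """Hypothetical sketch: patches of a liver ultrasound image act as tokens
    in a global self-attention block, in the spirit of the Ultra-Attention
    idea (the exact architecture is not given in the abstract)."""

    def __init__(self, img_size=224, patch_size=16, dim=256, heads=8,
                 num_classes=12, dropout=0.1):  # num_classes is an assumption
        super().__init__()
        num_patches = (img_size // patch_size) ** 2
        # Embed each local patch with a strided convolution (one token per patch).
        self.patch_embed = nn.Conv2d(1, dim, kernel_size=patch_size,
                                     stride=patch_size)
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches, dim))
        # Global attention over all patch tokens lets small local cues
        # influence the whole-image decision.
        self.attn = nn.MultiheadAttention(dim, heads, dropout=dropout,
                                          batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.dropout = nn.Dropout(dropout)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):                           # x: (B, 1, H, W) grayscale scan
        tokens = self.patch_embed(x)                # (B, dim, H/p, W/p)
        tokens = tokens.flatten(2).transpose(1, 2)  # (B, N, dim) patch tokens
        tokens = tokens + self.pos_embed
        attended, _ = self.attn(tokens, tokens, tokens)
        tokens = self.norm(tokens + attended)       # residual + layer norm
        pooled = self.dropout(tokens.mean(dim=1))   # mean-pool over patches
        return self.head(pooled)                    # standard-section logits

# One plausible reading of "Part-Transfer Learning" (an assumption): reuse
# pretrained weights for part of the network, freeze that part, and fine-tune
# only the remaining layers.
model = PatchAttentionClassifier()
for p in model.patch_embed.parameters():
    p.requires_grad = False                         # keep transferred part frozen
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=3e-4)
```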