Abstract
As an important part of cyber–physical–social intelligence, artificial intelligence (AI)-driven smart healthcare promotes the application of human–machine hybrid augmented intelligence in the medical field, including AI-assisted medical image analysis and lesion recognition. In particular, deep learning models represented by fully convolutional networks (FCNs) have achieved excellent performance in medical image segmentation. However, limited by the complex structure of segmentation networks and the inherent redundancy of the convolution operation, these models are extremely large. To further promote the application of machine intelligence in medical image analysis, this article proposes an attention U-Net based on Bi-ConvLSTM (AUBC-Net) for accurate segmentation of medical images. Unlike the classical U-Net, the proposed model captures the latent associations between decoder and encoder features with a bidirectional convolutional LSTM (Bi-ConvLSTM). Furthermore, to address the inherent redundancy of FCNs, we propose a lightweight feature generation strategy and optimize the computation of the Bi-ConvLSTM using tensor multilinear algebra, which greatly reduces the number of network parameters. Finally, we conduct image segmentation experiments on two benchmark medical datasets, and the results demonstrate that the proposed model not only outperforms existing methods but also effectively compresses network parameters while maintaining performance, which greatly facilitates AI-driven smart medical applications.
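The abstract's core idea, fusing encoder and decoder features through a bidirectional ConvLSTM rather than a plain skip-connection concatenation, can be sketched as follows. This is a minimal single-channel illustration under stated assumptions, not the paper's implementation: the helper names (`conv2d_same`, `ConvLSTMCell`, `bi_convlstm_fuse`) are hypothetical, and the encoder/decoder maps are treated as a two-step sequence processed once in each direction.

```python
# Hedged sketch: single-channel Bi-ConvLSTM fusion of an encoder feature map
# and a decoder feature map (illustrative only; names are not from the paper).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv2d_same(x, k):
    """'Same'-padded 2D cross-correlation for a single-channel map."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))  # zero padding
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

class ConvLSTMCell:
    """Standard ConvLSTM gating with one 3x3 kernel per gate and path."""
    def __init__(self, rng, ksize=3):
        # (input-path kernel, hidden-path kernel) for gates i, f, g, o.
        self.W = {g: (rng.standard_normal((ksize, ksize)) * 0.1,
                      rng.standard_normal((ksize, ksize)) * 0.1)
                  for g in "ifgo"}

    def step(self, x, h, c):
        pre = {g: conv2d_same(x, wx) + conv2d_same(h, wh)
               for g, (wx, wh) in self.W.items()}
        i, f, o = (sigmoid(pre[g]) for g in "ifo")
        g = np.tanh(pre["g"])
        c_new = f * c + i * g          # cell-state update
        h_new = o * np.tanh(c_new)     # hidden-state output
        return h_new, c_new

def bi_convlstm_fuse(enc, dec, fwd, bwd):
    """Run one ConvLSTM over [enc, dec] and another over [dec, enc],
    then stack the final hidden states as a two-channel fused feature."""
    def run(cell, seq):
        h = np.zeros_like(seq[0])
        c = np.zeros_like(seq[0])
        for x in seq:
            h, c = cell.step(x, h, c)
        return h
    hf = run(fwd, [enc, dec])   # forward direction
    hb = run(bwd, [dec, enc])   # backward direction
    return np.stack([hf, hb], axis=0)

rng = np.random.default_rng(0)
enc = rng.standard_normal((8, 8))   # toy encoder feature map
dec = rng.standard_normal((8, 8))   # toy decoder feature map
fused = bi_convlstm_fuse(enc, dec, ConvLSTMCell(rng), ConvLSTMCell(rng))
print(fused.shape)  # (2, 8, 8)
```

Because the hidden state is `o * tanh(c)` with a sigmoid-bounded output gate, every value of the fused map lies strictly inside (-1, 1); in the full model each direction would carry many channels and the fused tensor would feed the attention-gated decoder.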