Abstract

This work investigates the segmentation of medical ultrasound images using a transformer model combined with a deep learning neural network. A network that couples a transformer with a deep convolutional neural network (ConvTrans-Net) is proposed. Image content is preselected by combining an attention mechanism with a multilayer perceptron: different feature vectors are concatenated and fed into the multilayer perceptron, and the results of multiple attention operations are mapped to a larger dimensional space by a feed-forward network. Lesion areas segmented from ultrasound scans were analysed, and the performance and convergence of the model were evaluated using the Jaccard similarity coefficient, precision, and recall. In the experiments, two different iteration step sizes were selected; the model converged as the number of iteration steps increased and gradually stabilised. ConvTrans-Net achieved a Jaccard similarity coefficient of 85.21%, and its precision (85.17%) and recall (89.65%) were significantly higher than those of EfficientNet and DeepViT-L (P < 0.05). The experimental results show that the proposed model is stable and that combining a transformer with a deep learning neural network is effective for ultrasound image segmentation, with practical application value.
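To make the described block concrete, the following is a minimal sketch of a transformer encoder block of the kind the abstract outlines: multi-head attention over patch features, a residual connection, and a feed-forward network that expands the attended features to a larger dimensional space before projecting back. The class name, dimensions, and patch-token setup are illustrative assumptions, not the authors' released ConvTrans-Net code.

```python
# Illustrative sketch only; names and dimensions are assumptions,
# not the ConvTrans-Net implementation from the paper.
import torch
import torch.nn as nn

class TransformerSegBlock(nn.Module):
    def __init__(self, dim=256, num_heads=8, ffn_expansion=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        # Multi-head self-attention; the per-head outputs are concatenated
        # and projected back to `dim` inside nn.MultiheadAttention.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        # Feed-forward network mapping the attended features to a larger
        # dimensional space and back, as described in the abstract.
        self.ffn = nn.Sequential(
            nn.Linear(dim, dim * ffn_expansion),
            nn.GELU(),
            nn.Linear(dim * ffn_expansion, dim),
        )

    def forward(self, x):
        # x: (batch, num_patches, dim) token features, e.g. from a CNN backbone
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h)
        x = x + attn_out                 # residual connection
        x = x + self.ffn(self.norm2(x))  # expand, nonlinearity, project back
        return x

# Example: a grid of patch tokens from an ultrasound feature map
tokens = torch.randn(2, 256, 256)        # (batch, patches, channels)
out = TransformerSegBlock()(tokens)
print(out.shape)                          # torch.Size([2, 256, 256])
```

In practice such a block would sit between a convolutional encoder (producing the patch tokens) and a segmentation decoder; the exact way ConvTrans-Net combines the two is detailed in the full text.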
