Abstract
Early diagnosis of tumors plays an important role in improving treatment outcomes and patient survival rates. However, breast tumors are difficult to diagnose by invasive examination, so medical imaging has become the most intuitive auxiliary method for breast tumor diagnosis. Although no universally perfect method for image segmentation exists so far, consensus on the general principles of image segmentation has produced considerable research results and methods. In this context, this paper focuses on breast tumor image segmentation based on CNNs and proposes an improved DCNN method combined with a conditional random field (CRF). This method better captures multiscale and inter-pixel information. The experimental results show that, compared with a DCNN without these additions, segmentation accuracy is significantly improved.
Highlights
Based on this technology, we carry out practical research on the classification and localization of breast tumors and propose a breast tumor image segmentation algorithm combining a CRF with a DCNN. The experimental results show that, compared with classical CNN-based breast tumor image segmentation techniques, this method better captures multiscale and inter-pixel information and offers certain advantages.
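The paper does not give the exact CRF formulation here, but the idea of refining per-pixel CNN probabilities with a CRF can be illustrated with a deliberately simplified mean-field update: the unary term comes from the network's softmax output, and a Gaussian spatial kernel stands in for the pairwise smoothness potential. The function name and all parameter values below are illustrative assumptions, not the authors' model (which in practice would also use an appearance kernel over image intensities).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def crf_meanfield_step(prob, compat=3.0, sigma=2.0, iters=5):
    """Simplified mean-field refinement of CNN softmax output.

    prob: (H, W, C) per-pixel class probabilities from the network.
    A Gaussian spatial filter plays the role of the CRF pairwise
    (smoothness) message; the unary term is -log(prob). This is an
    illustrative approximation of dense-CRF inference, not the
    full model with appearance kernels.
    """
    unary = -np.log(np.clip(prob, 1e-8, 1.0))
    q = prob.copy()
    for _ in range(iters):
        # Message passing: spatially smooth each class probability map.
        msg = np.stack([gaussian_filter(q[..., c], sigma)
                        for c in range(q.shape[-1])], axis=-1)
        # Potts-style compatibility: reward agreement with neighbors.
        energy = unary - compat * msg
        q = np.exp(-energy)
        q /= q.sum(axis=-1, keepdims=True)  # renormalize to probabilities
    return q

# Toy demo: a uniform background region with one noisy pixel.
prob = np.empty((9, 9, 2))
prob[..., 0], prob[..., 1] = 0.9, 0.1
prob[4, 4] = [0.2, 0.8]          # isolated pixel that disagrees
refined = crf_meanfield_step(prob)
# The noisy pixel is pulled toward its neighbors' label (class 0).
```

The key design point is that the unary term preserves the network's confident predictions while the smoothed message suppresses isolated, spatially inconsistent labels, which is exactly the role the CRF plays on top of the DCNN in the paper.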
The performance of the target method is quantitatively evaluated by three indicators: the dice similarity coefficient (DSC), sensitivity, and positive predictive value (PPV). The DSC formula is as follows:

DSC = 2|P ∩ T| / (|P| + |T|)

where P is the predicted segmentation region and T is the ground-truth region.
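These three metrics are straightforward to compute from binary masks. The sketch below assumes the standard definitions (sensitivity = |P ∩ T| / |T|, PPV = |P ∩ T| / |P|); the function names and the toy masks are illustrative.

```python
import numpy as np

def dice(pred, target):
    """Dice similarity coefficient: DSC = 2|P ∩ T| / (|P| + |T|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum())

def sensitivity(pred, target):
    """True-positive rate: |P ∩ T| / |T|."""
    pred, target = pred.astype(bool), target.astype(bool)
    return np.logical_and(pred, target).sum() / target.sum()

def ppv(pred, target):
    """Positive predictive value: |P ∩ T| / |P|."""
    pred, target = pred.astype(bool), target.astype(bool)
    return np.logical_and(pred, target).sum() / pred.sum()

# Toy 4x4 prediction vs. ground truth: |P| = 6, |T| = 7, |P ∩ T| = 5,
# so DSC = 10/13, sensitivity = 5/7, PPV = 5/6.
P = np.array([[0,0,1,1],[0,1,1,1],[0,1,0,0],[0,0,0,0]])
T = np.array([[0,0,0,1],[0,1,1,1],[0,1,1,1],[0,0,0,0]])
```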
Comparison with other researchers' methods shows that our method ranks in the upper-middle tier among recent breast tumor image segmentation methods (Table 1).
Summary
Therefore, constructing a regional gray-difference evaluation function can reflect the similarity between different regions in terms of brightness, which is one of the factors influencing the subsequent merging criterion. The traditional regional gray-difference evaluation function is denoted δ(R_i, R_j). In digital images, image regions have multidimensional attributes. If the similarity between regions is judged only by their gray difference, important information will be lost in the final merging result. Therefore, we need to select other attributes of the regions to construct additional difference evaluation functions that, together with the gray-difference value, form a composite regional similarity evaluation function and achieve a better merging effect. The formula of the intermediate-type Gaussian distribution function is as follows:

A(x) = e^(−((x − a)/σ)^2), −∞ < x < +∞
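The combination of the two pieces above can be sketched as follows: a gray-difference score δ(R_i, R_j) is mapped through the Gaussian function A(x) so that near-identical regions score close to 1 and dissimilar regions decay smoothly toward 0. The choice of mean gray level for δ, and the values of a and σ, are assumptions for illustration; the paper's composite criterion also folds in other region attributes.

```python
import numpy as np

def gray_difference(region_i, region_j):
    """δ(R_i, R_j): absolute difference of mean gray levels
    (one common choice for the traditional evaluation function)."""
    return abs(region_i.mean() - region_j.mean())

def gaussian_membership(x, a, sigma):
    """A(x) = exp(-((x - a) / sigma)^2), the Gaussian function from the text."""
    return np.exp(-((x - a) / sigma) ** 2)

def composite_similarity(region_i, region_j, a=0.0, sigma=20.0):
    """Map the gray difference through the Gaussian: a small difference
    (near a = 0) gives similarity near 1; a large difference decays
    toward 0. a and sigma are hypothetical tuning parameters."""
    return gaussian_membership(gray_difference(region_i, region_j), a, sigma)

# Two regions with close mean intensities score near 1;
# a dissimilar pair scores near 0, so it would not be merged.
r1 = np.full((5, 5), 100.0)
r2 = np.full((5, 5), 105.0)
r3 = np.full((5, 5), 200.0)
```

In a region-merging loop, adjacent region pairs whose composite similarity exceeds a threshold would be merged first, which is how the evaluation function feeds the merging criterion described in the text.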