Abstract

Wireless capsule endoscopy (WCE) has been used in clinical medicine for many years. However, the reading process requires experienced physicians to review the footage manually over a long period, and the cost of the endoscope itself makes WCE examination expensive, with a long overall turnaround time. A new approach based on deep learning, offering robustness and high accuracy, can reduce the cost of detection and benefit the public. Considering the characteristics of small intestine lesions, this work focuses on labeling and feature detection, optimizing the pipeline through analysis of small intestine WCE images and experimental comparison. Building on the YOLOv3 detection network and retaining its basic feature-extraction backbone, an improved network is further optimized and effectively validated. Finally, redundant images are filtered out by comparing image hash values, presenting concise final detection results to physicians. Starting from image labeling, the design of a deep learning network structure for small intestine digestive tract endoscopy images is studied, which can effectively advance intelligent, computer-aided clinical application of WCE, with higher accuracy and a lower missed-detection rate than manual reading.
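
As an illustration of the redundancy-filtering step described above, the following minimal Python sketch compares consecutive frames with an average hash (aHash) and keeps a frame only when it differs sufficiently from the last kept one. The paper does not specify the hash function or threshold; aHash, the 8x8 hash size, and the Hamming-distance threshold of 5 are assumptions made purely for illustration.

    # Minimal sketch of hash-based redundant frame filtering (assumed aHash).
    from PIL import Image
    import numpy as np

    def average_hash(path, hash_size=8):
        """Return a 64-bit average hash of the image as a flat boolean array."""
        img = Image.open(path).convert("L").resize((hash_size, hash_size))
        pixels = np.asarray(img, dtype=np.float32)
        return (pixels > pixels.mean()).flatten()

    def hamming(h1, h2):
        """Number of differing bits between two hashes."""
        return int(np.count_nonzero(h1 != h2))

    def filter_redundant(frame_paths, threshold=5):
        """Keep a frame only if its hash differs enough from the last kept frame."""
        kept, last_hash = [], None
        for path in frame_paths:
            h = average_hash(path)
            if last_hash is None or hamming(h, last_hash) > threshold:
                kept.append(path)
                last_hash = h
        return kept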

Highlights

  • According to the 2016 statistics report released by the American Cancer Society, the incidence of digestive tract cancer in the United States is about 305,000 cases, including 132,000 women and 173,000 men; deaths caused by digestive tract diseases reached 153,000, including 64,000 women and 89,000 men [1]

  • In view of the problems in the computer-aided diagnosis of small intestinal diseases and the current state of machine-learning-based detection, a new method is designed for detecting multiple types of small intestine lesions from wireless capsule endoscope (WCE) images

  • Image labeling is based on the characteristics of small intestine lesions; the breakthroughs of deep learning depend on complex network structures and massive amounts of data


Summary

INTRODUCTION

According to the 2016 statistics report released by the American Cancer Society, the incidence of digestive tract cancer in the United States is about 305,000 cases, including 132,000 women and 173,000 men; deaths caused by digestive tract diseases reached 153,000, including 64,000 women and 89,000 men [1]. In recent years, with the development of machine learning, image-based detection of small intestinal lesions has received increasing attention. The breakthroughs of deep learning rest on complex network structures and massive amounts of data, so image labeling is designed around the characteristics of small intestine lesions. YOLOv3 is selected as the base network for small intestine lesion detection and then optimized. Building a feature pyramid and effectively fusing low-level and high-level feature maps to achieve better detection is an important direction in the development of object detection networks; the network structure is shown in Fig. 5. According to the size of small intestine lesions, the labeled sample sizes, and the detection requirements, the small-object output branch is removed. This improves detection accuracy and adaptability, reduces the output of the detection stage, and compresses the prior boxes to about 1/4 of the original number, while still producing more accurate predictions, improving overall detection speed, and reducing GPU memory usage.
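
The reported compression of the prior boxes to roughly 1/4 of the original can be checked with standard YOLOv3 settings. The short Python sketch below assumes a 416x416 input, strides of 8/16/32, and 3 anchors per grid cell (the paper's exact configuration may differ) and shows that dropping the small-object scale (stride 8) leaves about 24% of the original prediction boxes.

    # Back-of-the-envelope check of the ~1/4 reduction in prior boxes when the
    # small-object detection scale is removed. Standard YOLOv3 settings assumed:
    # 416x416 input, strides 8/16/32, 3 anchors per grid cell.
    INPUT_SIZE = 416
    ANCHORS_PER_CELL = 3
    STRIDES = {"small-object scale": 8, "medium scale": 16, "large scale": 32}

    def boxes_at(stride):
        grid = INPUT_SIZE // stride
        return grid * grid * ANCHORS_PER_CELL

    total = sum(boxes_at(s) for s in STRIDES.values())          # 10647 boxes
    reduced = total - boxes_at(STRIDES["small-object scale"])   # 2535 boxes
    print(f"all scales: {total}, without small-object scale: {reduced}")
    print(f"remaining fraction: {reduced / total:.2f}")         # ~0.24, about 1/4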


