Abstract

Image processing is a central topic in computer vision. As the number of images in use grows over time, images of differing resolution quality have become increasingly diverse, and low resolution introduces uncertainty into image-processing tasks; high-performance methods are therefore needed. U-Net is a Convolutional Neural Network (CNN) architecture for pixel-wise semantic segmentation, formed by an encoder network and a decoder network that together produce segmented images. In this paper, we apply the U-Net architecture to a lung CT image dataset whose images vary in resolution, producing segmented lung images. We ran experiments over several training-to-testing data ratios and compared model performance between a single-resolution dataset and a multiresolution dataset. Segmentation accuracy with the single-resolution dataset was 66.00% at a 5:5 ratio, 88.96% at 8:2, and 94.47% at 9:1; with the multiresolution dataset it was 82.42% at 5:5, 90.12% at 8:2, and 93.66% at 9:1. Training times with the single-resolution dataset were 59.94 seconds at 5:5, 87.16 seconds at 8:2, and 195.34 seconds at 9:1; with the multiresolution dataset, 49.60 seconds at 5:5, 102.08 seconds at 8:2, and 199.79 seconds at 9:1. Based on these results, the best accuracy was obtained with the single-resolution dataset at a 9:1 ratio, and the best training time with the multiresolution dataset at a 5:5 ratio.

Doi: 10.28991/ESJ-2023-07-02-014
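The encoder-decoder structure with skip connections that the abstract describes can be sketched as a reduced U-Net in PyTorch. This is a minimal illustrative sketch, not the paper's actual configuration: the depth (two levels instead of the original four), channel widths, and input size below are assumptions chosen for brevity.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, following the U-Net double-conv pattern
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1),
        nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    """Two-level U-Net: encoder, bottleneck, and decoder with skip connections.

    Hypothetical reduced model for illustration; the paper's network is deeper.
    """
    def __init__(self, in_ch=1, n_classes=1):
        super().__init__()
        self.enc1 = conv_block(in_ch, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = conv_block(64, 32)   # 64 in: upsampled 32 + skip 32
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)   # 32 in: upsampled 16 + skip 16
        self.head = nn.Conv2d(16, n_classes, 1)  # per-pixel class logits

    def forward(self, x):
        e1 = self.enc1(x)                    # encoder level 1
        e2 = self.enc2(self.pool(e1))        # encoder level 2
        b = self.bottleneck(self.pool(e2))   # bottleneck
        # Decoder: upsample, concatenate the matching encoder feature map, convolve
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                 # mask logits at input resolution

model = MiniUNet()
out = model(torch.randn(1, 1, 64, 64))  # one single-channel 64x64 CT slice
print(tuple(out.shape))
```

Because every pooling step is mirrored by an upsampling step, the output logits have the same spatial size as the input, which is what makes per-pixel segmentation of the lung region possible.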

