Abstract

The current standard of burn wound evaluation is based on digital photography of wounds examined by a burn specialist. Due to the subjectivity of this approach, researchers are developing automated burn wound analysis systems. Such systems should contain three major components: segmentation of burn images, feature extraction, and classification of segmented regions into healthy skin, burned skin, and background. The first purpose of this study is to examine various methods for each of these steps and to identify the best combination. The second goal is to compare the performance of the segmentation-based classification approach against deep learning. SegNet-based semantic segmentation was implemented as the deep learning approach. The best combination for classifying the images into skin, burn, and background regions was found to be the fuzzy c-means algorithm for the segmentation step and a multilayer feed-forward artificial neural network trained by the back-propagation algorithm for the classification step. With an F-score of 74.28% on the classification of images captured without a protocol, the proposed scheme achieved results comparable to deep learning, which had an F-score of 80.50%.
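
The abstract does not give implementation details, so the Python sketch below only illustrates the general pipeline it describes: fuzzy c-means clustering of pixel colours into three regions, followed by a small feed-forward (multilayer perceptron) classifier. The feature choice (raw RGB values), cluster count, network size, and the use of scikit-learn's MLPClassifier are illustrative assumptions, not the paper's actual configuration.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def fuzzy_c_means(X, n_clusters=3, m=2.0, max_iter=100, tol=1e-5, seed=0):
        """Cluster the rows of X with fuzzy c-means; return (centers, memberships)."""
        rng = np.random.default_rng(seed)
        U = rng.random((X.shape[0], n_clusters))
        U /= U.sum(axis=1, keepdims=True)                   # each row of U sums to one
        for _ in range(max_iter):
            Um = U ** m
            centers = (Um.T @ X) / Um.sum(axis=0)[:, None]  # membership-weighted means
            dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-10
            U_new = 1.0 / dist ** (2.0 / (m - 1.0))         # inverse-distance memberships
            U_new /= U_new.sum(axis=1, keepdims=True)
            if np.abs(U_new - U).max() < tol:
                return centers, U_new
            U = U_new
        return centers, U

    # Segment a stand-in RGB image into three fuzzy regions (skin, burn, background).
    image = np.random.rand(64, 64, 3)                       # placeholder for a burn photograph
    pixels = image.reshape(-1, 3)
    centers, memberships = fuzzy_c_means(pixels, n_clusters=3)
    label_map = memberships.argmax(axis=1).reshape(image.shape[:2])

    # Classify each region's colour feature with a small feed-forward network.
    # Training data here is random; in practice it would be labelled region
    # features (0 = healthy skin, 1 = burn, 2 = background).
    X_train = np.random.rand(300, 3)
    y_train = np.random.randint(0, 3, size=300)
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
    clf.fit(X_train, y_train)
    region_classes = clf.predict(centers)                   # predicted class per segmented region

In the study itself, hand-crafted features would be extracted from each segmented region before classification; the cluster centres here stand in for those features purely for illustration.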
