Burnsnet: Burn Region Segmentation Network From Color Images With Two-Way CNN
Burn injury is a serious health issue that leads to several thousand fatalities each year. Automated diagnosis and assessment methods based on color images hold the potential for timely diagnosis and treatment; however, research in this domain remains limited, and burn assessment is still a major challenge. In this work, we address the complex task of burn region segmentation in color images of burn patients. We present a semantic segmentation network with two parallel sub-networks: a spatial-stream network that extracts low-level features and a contextual-stream network that provides a larger receptive field. Our network builds on a pre-trained ResNet101 backbone and uses global average pooling and instance normalization for better encoding and fusion of the two streams' outputs. This dual-stream design is effective when data scarcity is a challenge, enabling robust semantic segmentation despite limited training samples. We prepared a pixel-wise labeled dataset for burn region segmentation, and experimental results on this dataset show that our proposed network outperforms several state-of-the-art semantic segmentation methods. Our method achieves an mIoU of 74.3% and a Matthews correlation coefficient (MCC) of 81.7%, approximately 4.5% higher than the second-best performing method. The Extended Burn Image Segmentation (EBIS) dataset and our model are available at https://github.com/VEDAs-Lab/EBIS
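The two evaluation metrics reported above, mIoU and MCC, are both derived from the pixel-wise confusion counts of a predicted segmentation mask against the ground truth. The following is a minimal sketch of how they can be computed for the binary (burn vs. background) case; the function names and the flat 0/1 mask representation are illustrative assumptions, not part of the paper's released code.

```python
import math

def confusion_counts(pred, target):
    """Count TP, FP, FN, TN over flattened binary masks (sequences of 0/1).
    Class 1 is the burn (foreground) region; class 0 is background."""
    tp = sum(1 for p, t in zip(pred, target) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, target) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, target) if p == 0 and t == 1)
    tn = sum(1 for p, t in zip(pred, target) if p == 0 and t == 0)
    return tp, fp, fn, tn

def miou(pred, target):
    """Mean IoU averaged over the foreground and background classes."""
    tp, fp, fn, tn = confusion_counts(pred, target)
    iou_fg = tp / (tp + fp + fn)  # intersection over union for burn pixels
    iou_bg = tn / (tn + fn + fp)  # intersection over union for background
    return (iou_fg + iou_bg) / 2

def mcc(pred, target):
    """Matthews correlation coefficient from the confusion counts."""
    tp, fp, fn, tn = confusion_counts(pred, target)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

Unlike plain pixel accuracy, both metrics penalize false positives and false negatives, which matters here because burn regions typically occupy a small fraction of the image.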