Abstract
Deep learning is widely applied to image classification and segmentation, but adversarial attacks can degrade model performance on both tasks. Medical images in particular typically contain various forms of noise because of constraints such as shooting angle, environmental lighting, and diverse imaging devices. To address the impact of such physically meaningful disturbances on existing deep learning models for burn image segmentation, we simulate attacks inspired by natural phenomena and propose an adversarial training approach tailored to this task. The method is evaluated on our burn dataset. After defensive training with our approach, segmentation accuracy on adversarial samples rises from 54% to 82.19%, a 1.97% improvement over conventional adversarial training methods, while substantially reducing training time. Ablation experiments validate the effectiveness of the individual loss terms, and we assess and compare training results with different adversarial samples across multiple metrics.
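For orientation, the sketch below illustrates a generic adversarial training step for a segmentation network, which is the general family of methods the abstract refers to. It is a minimal illustration only: the paper's physically inspired attacks and its specific loss composition are not described in the abstract, so the single-step gradient-sign perturbation (`fgsm_perturb`), the mixing weight `adv_weight`, and the step size `epsilon` here are assumed placeholders, not the authors' method.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, images, masks, epsilon=0.03):
    """Create adversarially perturbed images with one gradient-sign step.
    Stands in for the paper's noise-based attacks, which the abstract
    does not specify."""
    images = images.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(images), masks)
    loss.backward()
    perturbed = images + epsilon * images.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

def adversarial_training_step(model, optimizer, images, masks,
                              epsilon=0.03, adv_weight=0.5):
    """One mixed clean/adversarial update for a segmentation model.
    Expects logits of shape (N, C, H, W) and integer masks of shape (N, H, W)."""
    model.train()
    adv_images = fgsm_perturb(model, images, masks, epsilon)
    optimizer.zero_grad()
    clean_loss = nn.functional.cross_entropy(model(images), masks)
    adv_loss = nn.functional.cross_entropy(model(adv_images), masks)
    loss = (1.0 - adv_weight) * clean_loss + adv_weight * adv_loss
    loss.backward()
    optimizer.step()
    return loss.item()
```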