Abstract

Although deep learning models have been widely used in medical imaging research to perform lesion segmentation and classification tasks, several challenges remain in applying these models optimally and improving their performance. The objective of this study is to investigate a novel joint model and to assess how model performance improves as the training dataset size increases. Specifically, we select and modify a novel J-Net as the joint model: a two-way CNN architecture that combines a U-Net segmentation model with an image classification model. A skin cancer dataset of 1200 images is used, along with annotated lesion masks and ground-truth "mild" and "severe" labels. From this dataset, 11 subsets are randomly generated, ranging from 200 to 1200 images in increments of 100. Each subset is then divided into training, validation, and testing groups using a 70:20:10 ratio. The performance of the new joint model is compared with that of two independent models that perform lesion segmentation and classification separately. The results show that when the models are trained on data subsets of 200 to 1200 images, lesion segmentation accuracy increases from 0.80 to 0.92 using the two single models and from 0.86 to 0.95 using the joint J-Net model, while lesion classification accuracy increases from 0.80 to 0.90 and from 0.82 to 0.93, respectively. Thus, this study demonstrates that the new J-Net joint model achieves higher lesion segmentation and classification accuracy than two single models, and that model performance also increases with training dataset size.
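To make the two-way architecture concrete, the sketch below shows one plausible way to share a convolutional encoder between a U-Net-style segmentation decoder and a classification head. This is a minimal illustration, not the authors' exact J-Net: the class name `JointSegClsNet`, all layer widths and depths, and the input size are illustrative assumptions, since the abstract does not specify the configuration.

```python
# Minimal sketch (assumed layout, not the published J-Net) of a two-way
# joint model: a shared encoder feeds both a U-Net-style segmentation
# decoder and a classification head.
import torch
import torch.nn as nn

class JointSegClsNet(nn.Module):
    def __init__(self, in_channels=3, num_classes=2):
        super().__init__()
        # Shared encoder (two stages only, for brevity)
        self.enc1 = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())
        # Segmentation branch: upsample, fuse with the skip connection,
        # then predict a one-channel lesion mask.
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1))
        # Classification branch: pool the deepest shared features,
        # then a linear layer for the "mild"/"severe" labels.
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, num_classes))

    def forward(self, x):
        f1 = self.enc1(x)   # skip features at full resolution
        f2 = self.enc2(f1)  # deepest shared features
        seg = self.dec(torch.cat([self.up(f2), f1], dim=1))
        cls = self.cls_head(f2)
        return seg, cls     # lesion-mask logits, class logits

# Joint training would typically sum a segmentation loss (e.g. BCE on
# the annotated masks) and a classification loss (cross-entropy on the
# mild/severe labels) -- an assumption, as the paper's losses are not
# stated in the abstract.
model = JointSegClsNet()
mask_logits, class_logits = model(torch.randn(4, 3, 128, 128))
```

Sharing the encoder is the usual motivation for such joint models: the segmentation and classification objectives regularize a common feature extractor, which is consistent with the joint model's higher reported accuracy on both tasks.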
