Abstract

Deep learning coupled with transfer learning, which reuses a pretrained model's network structure and parameter values, offers a rapid and accurate solution for image segmentation. Approaches differ in how the transferred parameters are updated during training. In some studies, the parameters remain frozen and untrainable (referred to here as TL-S1), while in others they serve as trainable initial values updated from the first iteration (TL-S2). We introduce a new transfer learning scenario (TL-S3), in which the parameters initially remain frozen and are updated only after a specified cutoff point. Our research compares the performance of these scenarios, a comparison not yet explored in the literature. We run simulations on three architectures (Dense-UNet-121, Dense-UNet-169, and Dense-UNet-201) using an ultrasound dataset with the left ventricular wall as the region of interest. The results show that TL-S3 consistently outperforms the previous state-of-the-art scenarios, TL-S1 and TL-S2, achieving correct classification ratios (CCR) above 0.99 on the training data, with noticeable performance spikes after the cutoff. Notably, two of the three top-performing models on the validation data also come from TL-S3. The best model overall is Dense-UNet-121 with TL-S3 and a 20% cutoff, which achieves the highest CCR on the training (0.9950), validation (0.9699), and testing (0.9695) data.
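The cutoff mechanism that distinguishes TL-S3 from the frozen (TL-S1) and fully trainable (TL-S2) scenarios can be illustrated with a short training-loop sketch. The snippet below is a minimal PyTorch-style illustration under stated assumptions, not the authors' implementation: the DenseNet-121 backbone, the segmentation head, the 20% cutoff, and the `set_encoder_trainable` helper are all hypothetical choices used for exposition.

```python
# Illustrative sketch of the three transfer-learning scenarios (TL-S1/S2/S3).
# Assumes a PyTorch training loop; model, head, and cutoff are examples only.
import torch
import torch.nn as nn
import torchvision.models as models

def set_encoder_trainable(encoder: nn.Module, trainable: bool) -> None:
    """Freeze or unfreeze the transferred (pretrained) parameters."""
    for p in encoder.parameters():
        p.requires_grad = trainable

# Pretrained encoder (stand-in for the Dense-UNet-121 backbone).
encoder = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT).features
decoder = nn.Sequential(  # hypothetical segmentation head for a binary mask
    nn.Conv2d(1024, 1, kernel_size=1),
    nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
    nn.Sigmoid(),
)
model = nn.Sequential(encoder, decoder)

num_epochs = 100
cutoff = int(0.20 * num_epochs)   # TL-S3 with a 20% cutoff (assumed schedule)
scenario = "TL-S3"                # one of "TL-S1", "TL-S2", "TL-S3"

# TL-S1: frozen for the whole run; TL-S2: trainable from the first epoch;
# TL-S3: frozen at first, unfrozen once the cutoff epoch is reached.
set_encoder_trainable(encoder, trainable=(scenario == "TL-S2"))
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)

for epoch in range(num_epochs):
    if scenario == "TL-S3" and epoch == cutoff:
        # Unfreeze the transferred parameters after the cutoff and
        # rebuild the optimizer so it now updates the encoder as well.
        set_encoder_trainable(encoder, trainable=True)
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    # ... one training epoch over the ultrasound segmentation data goes here ...
```

The key design point in this sketch is that only parameters with `requires_grad=True` are handed to the optimizer, so the frozen backbone contributes features but receives no gradient updates until the cutoff condition fires.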
