Abstract

Fully supervised learning methods require a large volume of labelled training instances, and annotation is typically labour-intensive and costly. In medical image analysis this problem is amplified, since annotated medical images are far scarcer than unlabelled ones. Consequently, extracting meaningful underlying knowledge from unlabelled images is a formidable challenge in medical image analysis. This paper introduces SimTrip, a simple triple-view unsupervised representation learning model that pairs a triple-view architecture with a matching loss function to learn meaningful inherent knowledge efficiently from unlabelled data with small batch sizes. Using the representations extracted from unlabelled data, our model achieves strong performance on two medical image datasets with only partial labels, outperforming other state-of-the-art methods. The method offers a new paradigm for unsupervised representation learning and establishes a baseline intended to inspire more sophisticated SimTrip-based methods across a spectrum of computer vision applications. Code and a user guide are released at https://github.com/JerryRollingUp/SimTripSystem, and the system runs at http://43.131.9.159:5000/.
