Abstract

Machine learning approaches have significantly advanced the analysis of 3D medical images, such as CT and MRI scans, enabling improved diagnosis and treatment evaluation. These image volumes provide rich spatial context for understanding internal brain and body anatomy. Typical medical image analysis tasks, such as segmentation, reconstruction, and registration, are essential for characterizing this context. Among 3D data formats, meshes, point clouds, and others are used to represent anatomical structures, each with distinct applications. To better capture spatial information and address data scarcity, self- and semi-supervised learning methods have emerged. However, efficient 3D representation learning remains challenging. Recently, Transformers have shown promise, leveraging self-attention mechanisms that perform well in transfer learning and self-supervised settings. These techniques can be applied to medical domains without extensive manual labeling. This work explores data-efficient models, scalable deep learning, semantic context utilization, and transferability in 3D medical image analysis. We also evaluate foundation models, self-supervised pre-training, transfer learning, and prompt tuning, thus advancing this critical field.
