Prostate cancer classification plays a pivotal role in the diagnosis and treatment of this disease. In this paper, we present a novel approach, the texture graph transformer, which combines texture analysis techniques, graph-based representations, and multi-head attention to improve the accuracy of prostate cancer classification. The texture graph transformer captures intricate texture details and patterns within prostate cancer lesion regions, effectively improving the model’s ability to discern between different cancer types. By leveraging graph-based representations, our model constructs texture graphs that encode the spatial and textural relationships among regions in MRI images, enabling comprehensive feature extraction. A texture graph multi-head attention mechanism further enhances the model’s capacity to capture long-range dependencies and global context, with a specific focus on informative regions. Through extensive experiments on a dataset of 226 MRI images, we demonstrate that the proposed approach significantly improves prostate cancer classification performance.
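To make the described architecture concrete, the following is a minimal sketch of the kind of computation the abstract outlines: multi-head attention applied over a "texture graph" whose nodes are texture feature vectors from lesion regions and whose adjacency encodes spatial/textural relationships. This is an illustrative approximation in PyTorch, not the authors' implementation; all names, dimensions, and the masking scheme are assumptions.

```python
# Illustrative sketch (assumed, not the paper's code): attention over a texture graph.
import torch
import torch.nn as nn

class TextureGraphAttention(nn.Module):
    def __init__(self, dim: int = 64, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, node_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # node_feats: (batch, num_nodes, dim) texture descriptors, one per lesion region
        # adj: (batch, num_nodes, num_nodes) boolean adjacency of the texture graph
        # Restrict attention to connected regions by masking out non-edges.
        mask = ~adj                                                  # True = not allowed
        mask = mask.repeat_interleave(self.attn.num_heads, dim=0)    # (batch*heads, N, N)
        out, _ = self.attn(node_feats, node_feats, node_feats, attn_mask=mask)
        return self.norm(node_feats + out)                           # residual + layer norm

# Toy usage: 8 regions per image, 64-dim texture features (e.g. texture statistics).
x = torch.randn(2, 8, 64)
adj = torch.rand(2, 8, 8) > 0.5
adj |= torch.eye(8, dtype=torch.bool)   # self-loops so every region attends to itself
print(TextureGraphAttention()(x, adj).shape)  # torch.Size([2, 8, 64])
```

In this sketch the graph structure enters only through the attention mask; the paper's actual construction of texture graphs and its attention variant may differ.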