Journal of Urology | CME | 1 Apr 2023

MP09-06 ASSESSMENT OF A NOVEL BPMRI-BASED MACHINE LEARNING FRAMEWORK TO AUTOMATE THE DETECTION OF CLINICALLY SIGNIFICANT PROSTATE CANCER USING THE PI-CAI (PROSTATE IMAGING: CANCER AI) CHALLENGE DATASET

Andre Luis Abreu, Giovanni Cacciamani, Masatomo Kaneko, Vasileios Magouliantis, Yijing Yang, Vinay Duddalwar, C-C Jay Kuo, Inderbir Gill, and Chrysostomos L. Nikias

https://doi.org/10.1097/JU.0000000000003224.06

Abstract

INTRODUCTION AND OBJECTIVE: To develop a novel biparametric magnetic resonance imaging (bpMRI)-based machine learning (ML) framework to automatically detect prostate lesions using the PI-CAI (Prostate Imaging: Cancer AI) Challenge Dataset. PI-CAI is an annotated, multi-center, publicly available dataset of 1,300 bpMRIs intended for validating the performance of ML algorithms in the detection and diagnosis of clinically significant prostate cancer (csPCa; Grade Group ≥2).

METHODS: We used the Green Learning paradigm for feature extraction, which offers a lightweight model size and an explainable feature extraction process. We used the IPHOP-II method, which decomposes the input into a spatial-spectral representation; the discriminant dimensions are retained through feature selection and then fed to the classifier. IPHOP-II takes T2W, ADC, and high b-value DWI as input and generates per-voxel features for classification. The classifier is trained to discriminate between csPCa and non-csPCa. After training, the ML detector was tested by predicting voxel-wise csPCa probabilities on 100 unseen bpMRIs. The detector was evaluated with the per-lesion Average Precision (AP) and the per-patient Area Under the Receiver Operating Characteristic curve (AUROC) for the presence of csPCa. The PI-CAI score is the average of AP and AUROC. Results were also stratified according to PSA density.

RESULTS: A total of 1,200 bpMRIs were used to train the ML model, which was tested on 100 bpMRIs (Figure 1). In the testing set, the average per-lesion AP for csPCa detection was 0.47, with a sensitivity of 0.61 and a precision of 0.42 at the Youden index cutoff. At the patient level, the AUROC was 0.81, with a sensitivity of 0.92 at a false positive rate of 0.43. The PI-CAI score was 0.64. In the subset analysis restricted to patients with PSA density ≥0.2 ng/mL², classification improved: the AUROC was 0.83, with a sensitivity of 0.94 at a lower false positive rate of 0.41.

CONCLUSIONS: These results demonstrate the feasibility of a novel bpMRI-based ML framework for automated csPCa detection. The proposed ML framework, supported by clinical variables, could be tested to assist healthcare providers in improving the performance of automated csPCa detection.
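As an illustration only (not the authors' implementation), the sketch below shows, in Python with scikit-learn, the classification stage outlined in METHODS: per-voxel features are assumed to have already been extracted from co-registered T2W, ADC, and high b-value DWI volumes, and the IPHOP-II spatial-spectral decomposition itself is not reproduced. The feature selector, classifier, and all array shapes are illustrative assumptions.

```python
# Minimal sketch of the voxel-wise classification stage (assumption: per-voxel
# spatial-spectral features are already extracted; IPHOP-II is not reproduced).
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.pipeline import Pipeline

# X: (n_voxels, n_features) per-voxel feature matrix; y: 1 = csPCa, 0 = non-csPCa
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 256))    # placeholder features
y = rng.integers(0, 2, size=5000)   # placeholder labels

voxel_classifier = Pipeline([
    # keep only the most discriminant feature dimensions
    ("select", SelectKBest(score_func=f_classif, k=64)),
    # lightweight classifier producing a per-voxel csPCa probability
    ("clf", GradientBoostingClassifier()),
])
voxel_classifier.fit(X, y)

# voxel-wise csPCa probabilities, later reassembled into a 3D detection map
p_voxel = voxel_classifier.predict_proba(X)[:, 1]
```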
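Likewise, a minimal sketch of the evaluation described above: the PI-CAI score is the average of the per-lesion AP and the per-patient AUROC. The arrays below are placeholder values, and scikit-learn's average_precision_score is used as a stand-in for the challenge's lesion-level AP, which is computed from 3D detection maps by the challenge's own evaluation code.

```python
# Illustrative computation of the reported metrics (placeholder data only):
# PI-CAI score = mean of per-lesion Average Precision and per-patient AUROC.
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

lesion_labels = np.array([1, 0, 1, 1, 0, 0, 1])                 # lesion-level ground truth
lesion_scores = np.array([0.9, 0.3, 0.7, 0.4, 0.2, 0.6, 0.8])   # lesion detection confidences

patient_labels = np.array([1, 0, 1, 0, 1, 0])                   # patient-level csPCa presence
patient_scores = np.array([0.85, 0.2, 0.65, 0.4, 0.9, 0.1])     # patient-level case scores

ap = average_precision_score(lesion_labels, lesion_scores)
auroc = roc_auc_score(patient_labels, patient_scores)
pi_cai_score = (ap + auroc) / 2.0
print(f"AP={ap:.2f}  AUROC={auroc:.2f}  PI-CAI score={pi_cai_score:.2f}")
```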
Source of Funding: None

© 2023 by American Urological Association Education and Research, Inc.

Volume 209, Issue Supplement 4, April 2023, Page e106