Purpose: Shape information contains descriptive knowledge for classification problems. A novel methodology is introduced that combines valuable 3D and 3D + t shape information with the advantages of deep transfer learning. This method produces informative images from shape information that are interpretable by deep CNNs previously trained on a large-scale image dataset.

Methods: Our proposed pipeline can (1) generate 3D surfaces with optimal spatial resolution, (2) parametrize them using spherical harmonics shape descriptors (SPHARM) with optimal frequency resolution, (3) employ registration to establish surface correspondences, and (4) convert the corresponding 3D point clouds to 2D images by spreading out the latitude and longitude values of the spherical components in a flat coordinate system. The generated 2D images can then be fed into pre-trained deep CNNs. The efficiency of the proposed method was assessed for myocardial infarction (MI) classification using two different datasets.

Results: Five well-known CNN architectures were evaluated as fixed feature extractors. Deep features derived from the outputs of convolutional and pooling layers were fed into an SVM classifier. We also explored the feasibility of encoding additional information by assigning it to the individual RGB channels. EfficientNet-b0 reached the best performance, with an accuracy of 97.92% ± 1.47 using colored images containing this additional information.

Conclusion: By incorporating the transformed shape information with CNNs, our suggested approach can be adopted as a robust computer-aided diagnosis system to assist clinicians in detecting diseases such as MI.
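The flattening step in Methods (4) — spreading latitude and longitude values onto a flat coordinate system — resembles an equirectangular projection of surface points. The sketch below is an illustrative assumption, not the authors' implementation: it maps a 3D point cloud to a 2D image grid using a hypothetical `spherical_points_to_image` helper, encoding each point's radial distance as pixel intensity.

```python
import numpy as np

def spherical_points_to_image(points, height=64, width=128):
    """Illustrative sketch: map 3D points to a 2D equirectangular image.

    points : (N, 3) array of x, y, z coordinates (assumed near a sphere).
    Returns a (height, width) image whose pixel intensity encodes the
    radial distance of the point that landed in that cell.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)

    # Polar angle (latitude-like, in [0, pi]) and azimuth (longitude, in [-pi, pi]).
    theta = np.arccos(np.clip(z / np.maximum(r, 1e-12), -1.0, 1.0))
    phi = np.arctan2(y, x)

    # Spread the angles over a flat pixel grid.
    rows = np.clip((theta / np.pi * (height - 1)).astype(int), 0, height - 1)
    cols = np.clip(((phi + np.pi) / (2 * np.pi) * (width - 1)).astype(int), 0, width - 1)

    img = np.zeros((height, width))
    img[rows, cols] = r  # radial distance as intensity
    return img
```

In the paper's setting, each RGB channel of such an image could carry a different scalar quantity per surface point, which is one plausible reading of the colored-image variant described in Results.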