Background
To develop an effective radiological software prototype able to read Digital Imaging and Communications in Medicine (DICOM) files, automatically crop the inner ear from head computed tomography (CT), and classify inner ear anatomy as normal or as inner ear malformation (IEM).

Methods
A retrospective analysis was conducted on 2053 patients from 3 hospitals. From these, 1200 inner ear CTs were extracted for import and cropping and for training, testing, and validating an artificial intelligence (AI) model. Automated, CT-based cropping algorithms were developed to isolate the inner ear volume precisely, and a simple graphical user interface (GUI) was implemented for user interaction. Using the cropped CTs as input, a deep learning convolutional neural network (DL CNN) with 5-fold cross-validation classified inner ear anatomy as normal or abnormal. Five specific IEM types (cochlear hypoplasia, ossification, incomplete partition types I and III, and common cavity) were included, with data distributed equally across classes. Both the cropping tool and the AI model were extensively validated.

Results
The newly developed DICOM viewer/software achieved its objectives: reading CT files, automatically cropping inner ear volumes, and classifying them as normal or malformed. The cropping tool demonstrated an average accuracy of 92.25%. The DL CNN achieved an area under the curve (AUC) of 0.86 (95% confidence interval: 0.81-0.91); its accuracy was 0.812, precision 0.791, recall 0.800, and F1-score 0.766.

Conclusion
This study developed and validated a fully automated workflow for classifying normal versus abnormal inner ear anatomy by combining advanced image processing with deep learning. The tool exhibited good diagnostic accuracy, suggesting potential application in risk stratification. However, supervision by qualified medical professionals remains essential when the tool is used for clinical decision-making.
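The Methods describe reading DICOM files and automatically cropping the inner ear volume from head CT. As a minimal sketch of that pipeline stage, assuming pydicom and NumPy are available, the following loads a CT series into a volume and crops a fixed cube around a localized center. The function `crop_cube` and its `center_zyx` argument are illustrative placeholders; the study's actual localization algorithm is not detailed in the abstract.

```python
import numpy as np
import pydicom
from pathlib import Path

def load_ct_volume(dicom_dir):
    """Read a DICOM series and stack it into a 3-D NumPy volume."""
    slices = [pydicom.dcmread(p) for p in Path(dicom_dir).glob("*.dcm")]
    # Order slices along the patient z-axis so the stack is anatomically contiguous.
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    volume = np.stack([s.pixel_array for s in slices]).astype(np.float32)
    # Convert stored values to Hounsfield units via the rescale tags.
    return volume * float(slices[0].RescaleSlope) + float(slices[0].RescaleIntercept)

def crop_cube(volume, center_zyx, size=64):
    """Extract a fixed-size cube around a (z, y, x) voxel center.

    `center_zyx` stands in for the output of the study's automated
    inner-ear localization step, which the abstract does not specify.
    """
    half = size // 2
    z, y, x = center_zyx
    return volume[z - half:z + half, y - half:y + half, x - half:x + half]
```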
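The abstract names a DL CNN with 5-fold cross-validation but gives no architecture or training details. The sketch below, assuming PyTorch, scikit-learn, and 64x64x64 single-channel crops, shows how such a cross-validated binary classifier could be wired up; `SmallCNN3D` and all hyperparameters are illustrative assumptions, not the authors' model.

```python
import torch
import torch.nn as nn
from sklearn.model_selection import StratifiedKFold

class SmallCNN3D(nn.Module):
    """Toy 3-D CNN for binary (normal vs. malformed) classification of 64^3 crops."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        )
        self.head = nn.Linear(16 * 16 * 16 * 16, 2)  # 64 -> 32 -> 16 per spatial dim

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def cross_validate(volumes, labels, n_splits=5, epochs=10):
    """Stratified 5-fold cross-validation; full-batch updates for brevity."""
    x = torch.as_tensor(volumes, dtype=torch.float32).unsqueeze(1)  # (N, 1, D, H, W)
    y = torch.as_tensor(labels, dtype=torch.long)
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    fold_scores = []
    for train_idx, val_idx in skf.split(volumes, labels):
        model, loss_fn = SmallCNN3D(), nn.CrossEntropyLoss()
        opt = torch.optim.Adam(model.parameters(), lr=1e-4)
        for _ in range(epochs):
            model.train()
            opt.zero_grad()
            loss = loss_fn(model(x[train_idx]), y[train_idx])
            loss.backward()
            opt.step()
        model.eval()
        with torch.no_grad():
            preds = model(x[val_idx]).argmax(dim=1)
        fold_scores.append((preds == y[val_idx]).float().mean().item())
    return fold_scores
```

Real training would use mini-batches via a DataLoader; the full-batch loop here only keeps the cross-validation structure visible.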
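The Results report AUC, accuracy, precision, recall, and F1-score. These are standard binary-classification metrics, computable with scikit-learn as below; the label and probability arrays are hypothetical illustrations and are not the study's data.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)

# Hypothetical hold-out labels and predicted probabilities, NOT the study's data.
y_true = np.array([0, 0, 0, 1, 1, 1, 1, 0])
y_prob = np.array([0.2, 0.4, 0.7, 0.9, 0.6, 0.8, 0.3, 0.1])
y_pred = (y_prob >= 0.5).astype(int)  # threshold probabilities at 0.5

print("accuracy ", accuracy_score(y_true, y_pred))
print("precision", precision_score(y_true, y_pred))
print("recall   ", recall_score(y_true, y_pred))
print("F1       ", f1_score(y_true, y_pred))
print("AUC      ", roc_auc_score(y_true, y_prob))  # AUC uses scores, not hard labels
```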