Abstract

Objectives: Periapical radiographs are often taken in series to display all teeth present in the oral cavity. Our aim was to automatically assemble such a series of periapical radiographs into an anatomically correct status using a multi-modal deep learning model.

Methods: 4,707 periapical images from 387 patients (on average, 12 images per patient) were used. Radiographs were labeled according to their field of view, and the dataset was split into training, validation, and test sets, stratified by patient. In addition to the radiograph, the timestamp of image generation was extracted and abstracted as follows: a matrix containing the normalized timestamps of all images of a patient was constructed, representing the order in which the images were taken and providing temporal context information to the deep learning model. Using the image data together with the time sequence data, a multi-modal deep learning model consisting of two residual convolutional neural networks (ResNet-152 for image data, ResNet-50 for time data) was trained. Additionally, two uni-modal models were trained on image data and time data, respectively. A custom scoring technique was used to measure model performance.

Results: Multi-modal deep learning outperformed both uni-modal image-based learning (p<0.001) and time-based learning (p<0.05). The multi-modal deep learning model predicted tooth labels with an F1-score, sensitivity, and precision of 0.79 each, and an accuracy of 0.99. 37 out of 77 patient datasets were fully correctly assembled by multi-modal learning; in the remaining ones, usually only one image was incorrectly labeled.

Conclusions: Multi-modal modeling allowed automated assembly of periapical radiographs and outperformed both uni-modal models. Dental machine learning models can benefit from additional data modalities.

Clinical significance: Like humans, deep learning models may profit from multiple data sources for decision-making. We demonstrate how multi-modal learning can assist in assembling periapical radiographs into an anatomically correct status. Multi-modal learning should be considered for more complex tasks, as clinically a wealth of data is usually available and could be leveraged.
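The timestamp abstraction described in the Methods could be sketched as follows. This is a minimal illustration, assuming a per-patient min–max normalization of acquisition times to [0, 1]; the paper's exact construction (e.g. the matrix layout fed to the ResNet-50) is not specified in the abstract, so the function name and output shape here are illustrative assumptions.

```python
from datetime import datetime
import numpy as np

def timestamp_matrix(timestamps):
    """Normalize the acquisition timestamps of one patient's radiographs
    to [0, 1], encoding the order in which the images were taken.

    Illustrative sketch: returns one normalized value per image as a
    column vector; the actual matrix construction in the paper may differ.
    """
    t = np.array([ts.timestamp() for ts in timestamps], dtype=float)
    span = t.max() - t.min()
    # Guard against a degenerate series where all images share one timestamp.
    norm = (t - t.min()) / span if span > 0 else np.zeros_like(t)
    return norm.reshape(-1, 1)  # one row per image
```

The earliest image maps to 0 and the latest to 1, so the relative order (and spacing) of exposures within a series is preserved regardless of the absolute dates, which is the temporal context the time-branch network would consume.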
