The recent advent of deep learning (DL) has enabled data-driven models that can fully exploit rich image datasets from which physics can be learnt. Here we propose a novel data-driven image mechanics (D2IM) approach that learns from digital volume correlation (DVC) displacement fields of vertebrae, predicting displacement and strain fields from undeformed X-ray computed tomography (XCT) images. D2IM successfully predicted the displacements in all directions, particularly in the cranio-caudal direction of the vertebra, where high correlation (R² = 0.94) and generally minimal errors were obtained compared with the measured displacements. The predicted axial strain field in the cranio-caudal direction of the vertebra was also consistent in distribution with the measured one, displaying generally reduced errors within the vertebral body. Initial testing of D2IM on lower-resolution imaging produced promising results, indicating the future viability of integrating this technology into a clinical setting. This is the first study using experimental full-field measurements on bone structures from DVC to inform DL-based models such as D2IM, which represents a major contribution to the prediction of displacement and strain fields based only on the greyscale content of undeformed XCT images. In the future, D2IM will incorporate a range of biological structures and loading scenarios for accurate prediction of physical fields, aiming at clinical translation for improved diagnostics.

Data Availability
Code for preparing the dataset, training the D2IM model, and visualising/analysing results is hosted on GitHub: https://github.com/PeterSoar/D2IM_Prototype
The dataset used for this study can be found on Figshare: https://doi.org/10.6084/m9.figshare.25404220.v1
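As a rough illustration of the kind of post-processing the abstract describes, the minimal sketch below shows how an axial strain field can be derived from a predicted cranio-caudal displacement field (small-strain gradient assumption) and how agreement with DVC-measured displacements can be quantified with a coefficient of determination. This is not code from the D2IM repository; the array shapes, voxel size, and function names are hypothetical stand-ins.

```python
import numpy as np

def axial_strain(displacement_z, voxel_size):
    """Approximate axial (cranio-caudal) strain as the spatial gradient of the
    z-displacement field along the z axis, assuming small strains."""
    return np.gradient(displacement_z, voxel_size, axis=2)

def r_squared(measured, predicted):
    """Coefficient of determination between a measured field (e.g. DVC)
    and a predicted field, computed over all voxels."""
    ss_res = np.sum((measured - predicted) ** 2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical example: synthetic fields on a 64x64x64 voxel grid with an
# arbitrary voxel size; real D2IM inputs/outputs would replace these arrays.
rng = np.random.default_rng(0)
measured_uz = rng.normal(size=(64, 64, 64))                        # stand-in for DVC displacements
predicted_uz = measured_uz + 0.1 * rng.normal(size=(64, 64, 64))   # stand-in for model output

print("R2:", r_squared(measured_uz, predicted_uz))
print("strain field shape:", axial_strain(predicted_uz, voxel_size=0.039).shape)
```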