Abstract

Hand osteoarthritis (OA) severity can be assessed visually from radiographs using semi-quantitative grading systems. However, these grading systems are subjective and cannot distinguish minor differences. Joint space width (JSW) compensates for these disadvantages, as it quantifies the severity of OA by accurately measuring the distance between the bones of a joint. Current methods for assessing JSW require user interaction to identify the joints and delineate the initial joint boundary, which is time-consuming. To automate this process and offer a robust measurement of JSW, we proposed two novel methods: (1) the segmentation-based (SEG) method, which uses traditional computer vision techniques to calculate JSW; (2) the regression-based (REG) method, a deep learning approach employing a modified VGG-19 network to predict JSW. From a dataset of 3591 hand radiographs, 10,845 distal interphalangeal (DIP) joints were cropped as regions of interest (ROIs) and served as input to the SEG and REG methods. Bone masks of the ROI images, generated by a U-Net model, were provided as additional input alongside the ROIs. The JSW ground truth was labeled by a trained research assistant using a semi-automatic tool. Compared with the ground truth, the REG method achieved a correlation coefficient (r) of 0.88 and a mean square error (MSE) of 0.02 mm on the testing set; the SEG method achieved a correlation coefficient of 0.42 and an MSE of 0.15 mm. The results show that the REG method has promising performance for JSW measurement and, more generally, that deep learning approaches can facilitate the automatic quantification of distance features in medical images.
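To illustrate the kind of architecture the REG method describes, the sketch below shows a minimal VGG-19-based regressor for a scalar JSW value. It is not the authors' released implementation; the 2-channel input (grayscale ROI plus U-Net bone mask), the input resolution, and the regression head are assumptions chosen only to make the example self-contained.

```python
# Hypothetical sketch (not the authors' code): a VGG-19 backbone modified
# for scalar regression of joint space width (JSW).
# Assumption: the ROI image and its U-Net bone mask are stacked as a
# 2-channel input, one plausible way to combine the two inputs.
import torch
import torch.nn as nn
from torchvision.models import vgg19


class JSWRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = vgg19(weights=None)
        # Replace the first conv layer so the network accepts 2 channels
        # (grayscale ROI + binary bone mask) instead of RGB.
        backbone.features[0] = nn.Conv2d(2, 64, kernel_size=3, padding=1)
        self.features = backbone.features
        self.pool = nn.AdaptiveAvgPool2d((7, 7))
        # Regression head: a single continuous output (JSW in mm).
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(512 * 7 * 7, 256),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(256, 1),
        )

    def forward(self, x):
        return self.head(self.pool(self.features(x)))


# Usage: train with MSE loss against the labeled JSW ground truth.
model = JSWRegressor()
roi_and_mask = torch.randn(4, 2, 224, 224)   # batch of ROI + mask pairs
jsw_pred = model(roi_and_mask)               # shape (4, 1), predicted JSW
loss = nn.MSELoss()(jsw_pred, torch.rand(4, 1))
```

Training such a regressor directly against MSE matches the evaluation metric reported in the abstract, whereas the SEG baseline instead derives JSW geometrically from the segmented bone boundaries.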
