The human facial skeleton comprises many complex structures, which makes accurate fracture analysis challenging. To address this, we developed advanced image analysis software that segments and quantifies the spaces between fractured bones in facial CT images at the pixel level. This study used 3D CT scans from 1766 patients treated for facial bone fractures at a university hospital between 2014 and 2020. Our solution included a segmentation model focused on identifying the gaps created by facial bone fractures; however, training this model required costly pixel-level annotations. To overcome this, we used a stepwise annotation approach. First, clinical specialists marked bounding boxes around fracture areas. Next, trained specialists created an initial, unrefined pixel-level ground truth by referencing those bounding boxes. Finally, we produced a refined ground truth that corrected human errors, which improved segmentation accuracy. Radiomics feature analysis confirmed that the refined dataset exhibited more consistent patterns than the unrefined dataset, indicating improved reliability. The segmentation model's Dice similarity coefficient improved significantly, from 0.33 with the unrefined ground truth to 0.67 with the refined ground truth. This research introduces a new method for segmenting the spaces between fractured bones, allowing precise pixel-level identification of fracture regions. The model also supports quantitative severity assessment and enables 3D volume renderings, which can be used in clinical settings to develop more accurate treatment plans and improve outcomes for patients with facial bone fractures.
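The Dice similarity coefficient cited above (0.33 unrefined vs. 0.67 refined) measures the overlap between a predicted segmentation mask and its ground truth. A minimal sketch of how it is typically computed on binary masks follows; the function name and toy masks are illustrative and not taken from the paper.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    # eps guards against division by zero when both masks are empty
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# Toy 4x4 masks standing in for fracture-gap segmentations
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
truth = np.array([[0, 1, 0, 0],
                  [0, 1, 1, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 0]])
print(round(dice_coefficient(pred, truth), 2))  # → 0.75
```

A Dice value of 1.0 indicates perfect overlap and 0.0 indicates none, so the reported jump from 0.33 to 0.67 roughly doubles the predicted-to-ground-truth overlap.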