Abstract

Many medical image processing applications rely on targeted regions of interest within a larger volumetric image. Whole-body scans represent an extreme case in which large volumes must be broken into smaller sub-volumes for regional analysis. In this work, we sought automatic solutions to divide medical X-ray computed tomography (CT) images into six main anatomical regions: head, neck, chest, abdomen, pelvis and legs. We implemented and compared three methods: (1) an analytical approach which requires no training and relies solely on critical points in image intensity profiles to derive cut-planes that divide the scan into the mentioned regions, (2) a classical convolutional neural network (CNN) approach, which classifies each transaxial 2D plane independently and then concatenates classification results, and (3) a CNN followed by a context-based correction algorithm (CBCA), which improves the CNN classification using positional relationships between all CT slices. The analytical approach achieved acceptable accuracy for anatomical region segmentation without the need for explicit data labeling and was effective for batch labeling whole-body CTs, greatly reducing manual labeling efforts. CNNs achieved superior accuracy and allowed for rapid development and training, but required labeled data and were prone to producing discontinuous anatomical regions and therefore ambiguous anatomical boundaries. Post hoc correction of CNN results using CBCA overcame these limitations, achieving nearly perfect CT slice labeling and anatomical region segmentation.
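The abstract only summarizes how the CBCA exploits positional relationships between slices; the exact algorithm is not specified here. As one illustration of the general idea, a minimal sketch (not the authors' implementation) could snap noisy per-slice CNN labels to the closest monotonic head-to-legs sequence with dynamic programming, which guarantees that each anatomical region occupies a single contiguous run of slices. Region codes 0–5 for head through legs, and slices ordered from head to feet, are assumptions for this sketch:

```python
def enforce_contiguous(labels, n_regions=6):
    """Correct noisy per-slice region labels so that regions appear in
    anatomical order (0=head ... 5=legs) as contiguous runs of slices.

    Uses dynamic programming to find the monotonically non-decreasing
    label sequence that agrees with the input on the most slices.
    """
    n = len(labels)
    # score[r]: best agreement count so far with the current slice in region r
    score = [1 if labels[0] == r else 0 for r in range(n_regions)]
    back = []  # back[i][r]: best previous region when slice i+1 is in region r
    for i in range(1, n):
        best, arg = float("-inf"), 0
        new_score, choice = [], []
        for r in range(n_regions):
            # Regions may only stay the same or increase along the body axis,
            # so take a running (prefix) maximum over previous regions <= r.
            if score[r] > best:
                best, arg = score[r], r
            choice.append(arg)
            new_score.append(best + (1 if labels[i] == r else 0))
        back.append(choice)
        score = new_score
    # Backtrack from the best final region to recover the corrected labels.
    r = max(range(n_regions), key=lambda k: score[k])
    out = [r]
    for choice in reversed(back):
        r = choice[r]
        out.append(r)
    return out[::-1]
```

For example, `enforce_contiguous([0, 0, 1, 0, 1, 1, 2, 2])` returns `[0, 0, 0, 0, 1, 1, 2, 2]`, removing the spurious head/neck oscillation while changing as few slices as possible.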

