Abstract

Maximum intensity projection (MIP) is a standard volume-rendering technique for processing 3D volumetric data. Given a 3D CT volume, for example, it simply projects the maximum voxel intensity along a given viewing direction to produce a 2D image. Recently, MIP has been combined with the Btrfly Net for the vertebrae labelling task. However, this simple reformatting of 3D data discards rich contextual information in the volume. In this paper, we propose a learned orthographic pooling approach in place of image-processing-based MIP. Specifically, simple convolutional and bottleneck pooling modules are introduced to learn the orthographic projection of 3D data and output 2D intermediate feature maps. The learned orthographic pooling thus helps preserve detailed 3D context during projection. Furthermore, a unified Btrfly Net for vertebrae labelling is obtained by integrating the orthographic pooling sub-network. The Btrfly Net with the orthographic pooling sub-network is evaluated on the 2014 MICCAI vertebra localization challenge dataset. Compared with the original Btrfly Net using image-based MIP, the learned orthographic pooling substantially boosts vertebrae labelling performance.
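To make the baseline concrete, the sketch below illustrates classical image-processing MIP on a synthetic volume using NumPy. The array shape and axis choices are illustrative assumptions, not the paper's actual preprocessing; the point is that MIP keeps only one voxel per projection ray, which is the context loss the learned orthographic pooling is meant to address.

```python
import numpy as np

# Hypothetical 3D CT volume: (depth, height, width) voxel intensities.
rng = np.random.default_rng(0)
volume = rng.random((64, 128, 128), dtype=np.float32)

# Maximum intensity projection (MIP): collapse one axis by keeping the
# maximum voxel value along the viewing direction, yielding a 2D image.
sagittal_mip = volume.max(axis=2)  # project along width  -> (64, 128)
coronal_mip = volume.max(axis=1)   # project along height -> (64, 128)

# Each output pixel retains only the single brightest voxel on its ray;
# all other voxels along that ray are discarded. A learned orthographic
# pooling module would instead output 2D feature maps computed from the
# full 3D context rather than a hard per-ray maximum.
print(sagittal_mip.shape, coronal_mip.shape)
```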
