Abstract

Deep registration models have shown prominent online inference efficiency and competitive accuracy compared with traditional iterative optimization-based techniques. Most existing techniques rely on cascaded convolutional layers to parameterize the nonlinear mapping from volumetric images to dense displacement or velocity fields, introducing a memory burden in model training, especially for high-resolution volumetric images. Moreover, metric-space alignment tends to fall into local minima when confronted with pose and shape perturbations, requiring a prior affine transformation or sparse correspondence annotations. We propose an unsupervised deep supervoxel-wise correspondence network (SVNet) for dense correspondence of cone-beam computed tomography. In particular, we formulate dense volumetric registration as solving a sparse supervoxel-wise permutation matrix. We design an unsupervised learning scheme that learns supervoxel descriptors and seeks the optimal matching, relaxing the need for a prior transformation or correspondence annotation. The proposed SVNet is optimized in an unsupervised manner by regularized appearance and geometric alignment with respect to a supervoxel permutation matrix, in addition to the metric-space alignment of the volumetric images. Extensive experimental results demonstrate the effectiveness of the proposed approach in supervoxel mapping, with performance gains over compared deep registration models while avoiding memory-expensive training and computation.

Keywords: Supervoxel mapping; Correspondence; Cone-beam computed tomography
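The abstract does not specify how the sparse supervoxel-wise permutation matrix is solved. As a hedged illustration only (not the paper's method), one common way to turn pairwise descriptor similarities into a near-permutation matrix is Sinkhorn normalization, followed by an argmax for hard correspondences. All names, dimensions, and the toy descriptors below are hypothetical:

```python
import numpy as np

def sinkhorn(scores, n_iters=50, tau=0.05):
    """Convert a similarity matrix into a near-doubly-stochastic soft
    permutation matrix via alternating row/column normalization."""
    P = np.exp((scores - scores.max()) / tau)  # positive, temperature-scaled
    for _ in range(n_iters):
        P /= P.sum(axis=1, keepdims=True)  # rows sum to 1
        P /= P.sum(axis=0, keepdims=True)  # columns sum to 1
    return P

rng = np.random.default_rng(0)
N, D = 6, 32  # hypothetical: 6 supervoxels, 32-D learned descriptors
desc_a = rng.normal(size=(N, D))
desc_a /= np.linalg.norm(desc_a, axis=1, keepdims=True)

# Second scan: the same supervoxels in permuted order with small feature noise
perm = rng.permutation(N)
desc_b = desc_a[perm] + 0.01 * rng.normal(size=(N, D))
desc_b /= np.linalg.norm(desc_b, axis=1, keepdims=True)

scores = desc_a @ desc_b.T   # cosine similarity between descriptor sets
P = sinkhorn(scores)         # soft permutation matrix
matches = P.argmax(axis=1)   # hard supervoxel correspondences
```

In this toy setup, `matches` recovers the ground-truth permutation; a learned system would obtain `desc_a`/`desc_b` from a descriptor network rather than random features.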

