Abstract

It is routine to use the Iterative Closest Point (ICP) algorithm to correct for head motion in medical imaging. However, ICP's basic assumption of rigid-body motion is not strictly valid in the presence of non-rigid motion such as facial expressions. In this study, we propose a novel method to reduce the adverse effects of facial expressions on head motion estimation. First, we design a network named DFG-Net that generates a deformation field in the RGB domain and computes a confidence map. We then extend the original ICP algorithm by embedding this confidence map, which represents each point's degree of non-rigid motion. Compared to other techniques, our method eliminates the negative effects of facial expression while making full use of the available pose information. Experiments demonstrate that our approach reduces position error by 19.23% and orientation error by 36.37% compared to the original ICP algorithm, and that it is robust to variations in expression.
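The core idea of weighting ICP by a per-point confidence map can be sketched as a confidence-weighted rigid alignment step. The sketch below is illustrative only: it uses the standard weighted Kabsch/SVD solution for the best rigid transform, with hypothetical per-point weights standing in for the confidence map (the paper's actual DFG-Net-derived weighting is not reproduced here).

```python
# Sketch: one confidence-weighted rigid alignment step, as used inside
# an ICP iteration. Points with low confidence (suspected non-rigid
# motion, e.g. facial expression) contribute less to the estimate.
# Hypothetical weights; NOT the paper's DFG-Net confidence map.
import numpy as np

def weighted_rigid_transform(src, dst, w):
    """Find R, t minimizing sum_i w_i * ||R @ src_i + t - dst_i||^2
    via the weighted Kabsch/SVD method.

    src, dst: (N, 3) corresponding point sets; w: (N,) nonnegative weights.
    """
    w = w / w.sum()                               # normalize weights
    mu_s = (w[:, None] * src).sum(axis=0)         # weighted centroids
    mu_d = (w[:, None] * dst).sum(axis=0)
    # Weighted cross-covariance of the centered point sets
    S = (src - mu_s).T @ (w[:, None] * (dst - mu_d))
    U, _, Vt = np.linalg.svd(S)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

In a full ICP loop, this step would replace the unweighted transform estimate: correspondences are found as usual, but points flagged as non-rigidly moving receive small weights, so the recovered head pose is driven by the rigid regions of the face.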
