Abstract

It is routine to use the Iterative Closest Point (ICP) algorithm to correct head motion in medical imaging. However, ICP assumes rigid-body motion, an assumption that does not strictly hold in the presence of non-rigid motion such as facial expressions. In this study, we propose a novel method to reduce the adverse effects of facial expressions on head motion estimation. First, we design a network named DFG-Net that generates a deformation field in the RGB domain and computes a confidence map. We then extend the original ICP algorithm by embedding this confidence map, which encodes each point's degree of non-rigid motion. Compared with other techniques, our method suppresses the negative effects of facial expression while making the most of the available pose information. Experiments demonstrate that our approach reduces position error by 19.23% and orientation error by 36.37% relative to the original ICP algorithm, and that it is robust to variations in facial expression.
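The confidence-weighted extension of ICP described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: we assume a per-point confidence weight (lower for points the network judges to be moving non-rigidly) is plugged into a weighted Kabsch solve at each ICP iteration, with brute-force nearest-neighbor correspondences. All function names here are hypothetical.

```python
import numpy as np

def weighted_rigid_transform(src, dst, w):
    """Weighted Kabsch: find R, t minimizing sum_i w_i ||R src_i + t - dst_i||^2."""
    w = w / w.sum()
    mu_s = (w[:, None] * src).sum(axis=0)          # weighted centroid of source
    mu_d = (w[:, None] * dst).sum(axis=0)          # weighted centroid of target
    # Weighted cross-covariance between centered point sets
    S = (w[:, None] * (src - mu_s)).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(S)
    # Reflection correction so R is a proper rotation (det = +1)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

def confidence_weighted_icp(src, dst, conf, n_iter=30):
    """ICP where each point's influence is scaled by its rigidity confidence.

    conf: per-point weights in [0, 1]; non-rigid points get low weight and
    therefore barely affect the estimated head pose (hypothetical interface).
    """
    R_total, t_total = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(n_iter):
        # Brute-force nearest-neighbor correspondences (a k-d tree would
        # be used in practice for large clouds)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        idx = d2.argmin(axis=1)
        R, t = weighted_rigid_transform(cur, dst[idx], conf)
        cur = cur @ R.T + t
        # Compose the incremental transform into the running estimate
        R_total = R @ R_total
        t_total = R @ t_total + t
    return R_total, t_total
```

With all confidences equal this reduces to ordinary point-to-point ICP; down-weighting expression-affected points is what removes their pull on the rigid pose estimate.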
