Abstract

In this paper, we propose a new kind of color-coded mark point and a new pixel-level quasi-ellipse detector. The method is especially applicable to three-dimensional (3D) panoramic reconstruction of the head. Images from adjacent perspectives can be stitched by matching the pasted color-coded mark points in the overlap region to calculate the transformation matrix. This paper focuses on how the color-coded mark points work and on how to detect and match corresponding points across perspectives. Tests on data obtained by structured light projection demonstrate the efficiency and accuracy of the method.
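
The abstract does not specify how the transformation matrix is estimated from the matched mark points. As a minimal sketch of the kind of computation involved (not necessarily the authors' procedure), the following Python snippet recovers a rigid rotation and translation from matched 3D mark-point coordinates using a standard Kabsch least-squares fit; the function and variable names are hypothetical.

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Estimate R and t such that R @ src_i + t ~= dst_i.

    src, dst: (N, 3) arrays of matched 3D mark-point coordinates
    from two adjacent perspectives (hypothetical example data).
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)

    # Center both point sets on their centroids.
    src_c = src.mean(axis=0)
    dst_c = dst.mean(axis=0)
    A = src - src_c
    B = dst - dst_c

    # SVD of the cross-covariance matrix gives the optimal rotation.
    U, _, Vt = np.linalg.svd(A.T @ B)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Usage: recover a known transform from synthetic matched points.
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(6, 3))
theta = np.deg2rad(20.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.1, -0.2, 0.3])
R_est, t_est = estimate_rigid_transform(pts, pts @ R_true.T + t_true)
assert np.allclose(R_est, R_true) and np.allclose(t_est, t_true)
```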
