Abstract

We present a novel approach for rendering low-resolution point clouds with multiple high-resolution textures – the type of data typically produced by passive vision systems. The low precision, noise, and occasional incompleteness of such data sets make them unsuitable for existing point-based rendering techniques, which are designed for high-precision, high-density point clouds. Our new algorithm – view-dependent textured splatting (VDTS) – combines traditional splatting with a view-dependent texturing strategy to reduce rendering artifacts caused by imprecision or noise in the input data.
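The abstract does not spell out how the view-dependent texturing step is carried out, but the general idea behind view-dependent texturing is to blend the capture textures using weights that favor cameras whose viewing direction best matches the current rendering viewpoint. The sketch below illustrates that weighting idea only; it is not the paper's VDTS algorithm, and all names (Splat, CameraView, viewWeights) and the cosine-squared falloff are illustrative assumptions.

```cpp
// Minimal sketch of an angle-based view-dependent weighting scheme
// (an assumption for illustration, not the paper's exact method).
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return {v.x / len, v.y / len, v.z / len};
}

struct Splat      { Vec3 position; };   // one low-resolution point sample
struct CameraView { Vec3 center;   };   // capture camera that owns one texture

// Weight each capture texture by how closely its direction toward the splat
// matches the current view direction; normalize so the weights sum to 1.
std::vector<float> viewWeights(const Splat& s, const Vec3& eye,
                               const std::vector<CameraView>& cams) {
    Vec3 viewDir = normalize(sub(eye, s.position));
    std::vector<float> w(cams.size());
    float total = 0.0f;
    for (size_t i = 0; i < cams.size(); ++i) {
        Vec3 camDir = normalize(sub(cams[i].center, s.position));
        float c = std::max(0.0f, dot(viewDir, camDir));  // ignore back-facing cameras
        w[i] = c * c;                                    // sharpen the angular falloff
        total += w[i];
    }
    if (total > 0.0f)
        for (float& wi : w) wi /= total;
    return w;
}

int main() {
    Splat s{{0.0f, 0.0f, 0.0f}};
    std::vector<CameraView> cams = {{{1.0f, 0.0f, 1.0f}}, {{-1.0f, 0.0f, 1.0f}}};
    std::vector<float> w = viewWeights(s, {0.2f, 0.0f, 1.0f}, cams);
    std::printf("blend weights: %.3f %.3f\n", w[0], w[1]);
    return 0;
}
```

In a splatting renderer, such weights would be evaluated per splat (or per fragment) and used to blend the colors sampled from each camera's texture, so that the texture captured from the most similar viewpoint dominates.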
