Abstract

Saliency-driven mesh simplification methods have shown promising results in maintaining visual detail, but effective simplification requires accurate 3D saliency maps. Conventional mesh saliency detection methods may fail to capture salient regions in 3D models with texture. To address this issue, we propose a novel saliency detection method that fuses saliency maps from multi-view projections of textured models. Specifically, we introduce a texel descriptor that combines local convexity and chromatic aberration to capture texel saliency at multiple scales. Furthermore, we construct a novel dataset that reflects human eye fixation patterns on textured models, which serves as an objective ground truth for evaluation. Experimental results demonstrate that our saliency-driven method outperforms existing approaches on several evaluation metrics. The source code of our method is available at https://github.com/bkballoon/mvsm-fusion, and the dataset is available at DOI 10.5281/zenodo.8131602.
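
To illustrate the multi-view fusion idea described above, the following is a minimal sketch of how per-view 2D saliency maps could be aggregated into per-vertex 3D saliency. All names, the sampling scheme, and the mean-over-visible-views fusion rule are assumptions for illustration, not the authors' exact pipeline; the rendering and 2D saliency steps are assumed to have been performed beforehand.

```python
import numpy as np

def fuse_multiview_saliency(view_saliency, vertex_to_pixel, visibility):
    """Fuse per-view 2D saliency maps into a per-vertex saliency vector.

    view_saliency:   list of V arrays, each (H, W), one 2D saliency map per view
    vertex_to_pixel: (V, N, 2) integer pixel coordinates (x, y) of each of the
                     N mesh vertices projected into each of the V views
    visibility:      (V, N) boolean mask, True where a vertex is visible in a view

    Returns an (N,) array: the mean saliency over the views in which each vertex
    is visible (a simple stand-in for a more elaborate fusion scheme).
    """
    num_views, num_vertices = visibility.shape
    accum = np.zeros(num_vertices)
    counts = np.zeros(num_vertices)
    for v in range(num_views):
        xs, ys = vertex_to_pixel[v, :, 0], vertex_to_pixel[v, :, 1]
        sampled = view_saliency[v][ys, xs]              # sample 2D map at projected pixels
        accum += np.where(visibility[v], sampled, 0.0)  # ignore occluded vertices
        counts += visibility[v]
    return accum / np.maximum(counts, 1)                # avoid division by zero

# Toy usage with random data standing in for rendered views and projections
rng = np.random.default_rng(0)
H, W, N, V = 64, 64, 100, 6
maps = [rng.random((H, W)) for _ in range(V)]
proj = rng.integers(0, 64, size=(V, N, 2))
vis = rng.random((V, N)) > 0.3
per_vertex_saliency = fuse_multiview_saliency(maps, proj, vis)
```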
