Abstract

We present a deep neural network‐based method that acquires high‐quality shape and spatially varying reflectance (SVBRDF) of 3D objects using smartphone multi‐lens imaging. Our method acquires two images simultaneously using a zoom lens and a wide‐angle lens of a smartphone under either natural illumination or phone flash conditions, effectively functioning like a single‐shot method. Unlike traditional multi‐view stereo methods, which require sufficient differences in viewpoint and only estimate depth at a certain coarse scale, our method estimates fine‐scale depth by utilising an optical‐flow field extracted from the subtle baseline and perspective differences between the two optics in the two images captured simultaneously. We further guide the SVBRDF estimation using the estimated depth, resulting in superior results compared to existing single‐shot methods.
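The fine‐scale depth estimation described above rests on standard stereo triangulation: a per‐pixel disparity (here, the horizontal component of the optical‐flow field between the two lens images) is converted to depth given the focal length and the small inter‐lens baseline. The sketch below is a minimal illustration of that conversion; the focal length, baseline, and disparity values are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: disparity-to-depth conversion for a rectified
# stereo pair, Z = f * B / d. In the paper's setting, disparity would
# come from an optical-flow field between the wide-angle and zoom
# images; all numeric values below are assumed for illustration.

def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Return metric depth Z = f * B / d for a positive disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: an assumed ~1 cm inter-lens baseline and ~2800 px focal
# length; even nearby surfaces yield only a few pixels of disparity,
# which is why a dense optical-flow field (rather than coarse
# multi-view stereo matching) is needed for fine-scale depth.
focal_px = 2800.0    # assumed focal length in pixels
baseline_m = 0.01    # assumed distance between the two lenses
for d in (4.0, 8.0, 16.0):
    z = depth_from_disparity(d, focal_px, baseline_m)
    print(f"disparity {d:5.1f} px -> depth {z:.3f} m")
```

The inverse relationship between disparity and depth also explains why a subtle baseline suffices at close range: small depth changes still produce measurable flow.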
