Abstract

Most digital cameras use specialized autofocus sensors, such as phase detection, lidar, or ultrasound, to measure focus state directly. However, such sensors increase cost and complexity without directly optimizing final image quality. This paper proposes a new pipeline for image-based autofocus and shows that neural image analysis finds focus 5-10x faster than traditional contrast maximization. We achieve this by learning a direct mapping from an image to its focus position. In further contrast with conventional methods, learned methods can generate scene-adaptive focus trajectories that optimize synthesized image quality for dynamic and three-dimensional scenes. We propose a focus control strategy that varies focal position dynamically to maximize image quality as estimated from the focal stack. We present a rule-based agent and a learned agent for different scenarios and show their advantages over other focus-stacking methods.
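The speed claim can be illustrated with a toy sketch: classic contrast-detection autofocus must capture and score many images while hill-climbing toward the sharpness peak, whereas a learned image-to-focus mapping predicts the focus position from a single capture. The scene model, sharpness metric, and `learned_af` stub below are illustrative assumptions, not the paper's implementation.

```python
def sharpness(lens_pos, true_focus):
    # Toy contrast metric: peaks when the lens sits at the true focus.
    return 1.0 / (1.0 + (lens_pos - true_focus) ** 2)

def contrast_af(true_focus, start=0.0, step=8.0, tol=0.25):
    """Contrast maximization: hill-climb over lens positions, halving the
    step when neither direction improves sharpness. Returns the final
    position and the number of images captured along the way."""
    pos, captures = start, 1
    best = sharpness(pos, true_focus)
    while step > tol:
        for cand in (pos + step, pos - step):
            score = sharpness(cand, true_focus)
            captures += 1  # every probe costs one image capture
            if score > best:
                best, pos = score, cand
                break
        else:
            step /= 2.0  # no improvement either way: refine the search
    return pos, captures

def learned_af(image_features):
    """Stand-in for the learned image-to-focus mapping: one forward pass
    on a single capture yields the focus position directly."""
    return image_features["predicted_focus"], 1

pos, n = contrast_af(true_focus=20.0)
print(f"contrast AF: position {pos:.2f} after {n} captures")
```

The iterative search needs on the order of tens of captures per focus event; replacing it with one inference per capture is the source of the speedup the abstract refers to.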
