Abstract

A sparse camera array uses several identical cameras for multi-frame super-resolution. However, the influence of the camera layout on super-resolution performance remains unclear. In a conventional camera array, all cameras share a similar observation model, so the information they collect is largely redundant. This study presents a novel super-resolution framework that actively rotates the cameras of a sparse camera array by prescribed angles. Through a detailed analysis of the forward imaging model, we show that camera rotation exploits the rotational asymmetry of the pixel layout and pixel shape. The proposed camera layout therefore provides more diverse and informative spatial sampling than a conventional camera array with only translations between cameras. A physics-enabled deep neural network is introduced for multi-frame super-resolution. Guided by explicitly defined gradients and affine models, it effectively decodes the additional information collected by the rotational camera array and overcomes the problem of aligning spatial features across large displacements. Experimental results on synthetic and real prototype datasets validate the proposed theory and solution. We believe this study can inspire innovations in future task-specific camera array system design.
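To make the core sampling argument concrete, the sketch below (not the paper's implementation; all names and parameters are illustrative assumptions) compares the sub-pixel sample phases produced by a purely translated camera against those produced by a slightly rotated one. Because a square pixel lattice is not rotationally symmetric, rotation spreads the samples over many distinct sub-pixel offsets, whereas translation contributes only a single new offset for the whole field.

```python
# Minimal sketch: why rotating a camera in a sparse array diversifies
# spatial sampling compared to translating it. We model each camera as a
# square grid of pixel centers, apply a 2D affine transform (rotation
# and/or translation), and count the distinct sub-pixel sample phases.
# Function names and parameter values here are illustrative assumptions.

import numpy as np

def pixel_centers(n=8, pitch=1.0):
    """Centers of an n x n square pixel grid, origin at the grid center."""
    c = (np.arange(n) - (n - 1) / 2.0) * pitch
    xs, ys = np.meshgrid(c, c)
    return np.stack([xs.ravel(), ys.ravel()], axis=1)

def affine(points, angle_deg=0.0, shift=(0.0, 0.0)):
    """Rotate points about the origin, then translate (a 2D affine model)."""
    t = np.deg2rad(angle_deg)
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    return points @ R.T + np.asarray(shift)

def subpixel_phases(points, pitch=1.0):
    """Fractional position of each sample inside its unit pixel cell."""
    return np.mod(points, pitch)

grid = pixel_centers()

# A second camera related by translation only: every pixel shares ONE
# common phase, so the extra camera adds a single new sampling offset.
translated = affine(grid, shift=(0.3, 0.7))
print("translation: unique phases =",
      len(np.unique(np.round(subpixel_phases(translated), 6), axis=0)))

# A second camera rotated by a few degrees: the phase varies with the
# pixel's position in the field, so almost every pixel observes the
# scene at a distinct sub-pixel offset.
rotated = affine(grid, angle_deg=5.0)
print("rotation:    unique phases =",
      len(np.unique(np.round(subpixel_phases(rotated), 6), axis=0)))
```

Running the sketch prints one unique phase for the translated grid and dozens for the rotated one, which is the intuition behind the claim that rotation yields more informative spatial sampling than translation alone.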
