Abstract
For 3D image reconstruction and depth sensing, a desirable feature of complementary metal-oxide-semiconductor (CMOS) image sensors is the ability to detect the local light incident angle and the light polarization. In recent years, advances in CMOS technologies have enabled dedicated circuits that determine these parameters within an image sensor. However, because of the large number of pixels required per cluster to enable such functionality, implementing these features in regular CMOS imagers is still not viable. Current state-of-the-art solutions require eight pixels per cluster to detect local light intensity, incident angle, and polarization. The technique for detecting the local incident angle is widely exploited in the literature, and the authors have shown in previous works that it is possible to perform this task with a cluster of only four pixels. In this work, the authors explore three novelties: a means of determining three of the four Stokes parameters, a new paradigm in polarization pixel-cluster design, and the extended ability to detect both the local light angle and intensity. The features of the proposed pixel cluster are demonstrated through SPICE (Simulation Program with Integrated Circuit Emphasis) simulations of the regular Quadrature Pixel Cluster and Polarization Pixel Cluster models, the results of which agree with experimental results reported in the literature.
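The claim of recovering three of the four Stokes parameters is consistent with the standard formalism for division-of-focal-plane polarimetry, in which linear analyzers at four orientations yield S0, S1, and S2 but not the circular component S3. As an illustration only, since the paper's specific cluster read-out circuit is not detailed here, the sketch below assumes four pixel intensities measured behind hypothetical 0°, 45°, 90°, and 135° linear polarization analyzers.

```python
import numpy as np

def stokes_from_polarization_pixels(i0, i45, i90, i135):
    """Estimate the first three Stokes parameters (S0, S1, S2) from four
    pixel intensities measured behind linear analyzers at 0, 45, 90 and
    135 degrees (assumed division-of-focal-plane arrangement).

    S3 (circular polarization) cannot be recovered from linear analyzers
    alone, matching the abstract's "three of four Stokes parameters".
    """
    s0 = 0.5 * (i0 + i45 + i90 + i135)  # total intensity
    s1 = i0 - i90                        # horizontal vs. vertical linear component
    s2 = i45 - i135                      # +45 deg vs. -45 deg linear component
    return s0, s1, s2

# Example: fully horizontally polarized light of unit intensity.
s0, s1, s2 = stokes_from_polarization_pixels(i0=1.0, i45=0.5, i90=0.0, i135=0.5)
dolp = np.hypot(s1, s2) / s0        # degree of linear polarization
aolp = 0.5 * np.arctan2(s2, s1)     # angle of linear polarization (radians)
print(f"S0={s0:.2f}  S1={s1:.2f}  S2={s2:.2f}  "
      f"DoLP={dolp:.2f}  AoLP={np.degrees(aolp):.1f} deg")
```

This sketch only shows how the three linear Stokes parameters follow from four polarization-analyzed intensities; the actual cluster described in the paper computes them from its pixel-level signals.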
Highlights
Demand is increasing for new multimedia resources with three-dimensional content, in images, games, movies, and augmented reality
A compact hybrid pixel cluster was presented with the capacity to detect local light intensity, incident angle, and local light polarization, which enables it to determine Stokes parameters
The proposed hybrid pixel cluster embeds the functionality of the two pixel clusters previously presented in the literature, the polarization pixel cluster and the quadrature pixel cluster
Summary
Demand is increasing for new multimedia resources with three-dimensional content, in images, games, movies, and augmented reality. In this context, capturing 3D information, or depth sensing, is essential for many applications, including object and material classification [1,2,3], navigation [4,5], image polarization contrast in biological tissues [6,7,8,9], improved vision in haze conditions [10], diagnosis in oncology [11,12], different views of the same image [13,14], facial recognition for surveillance [15,16,17,18], atmospheric remote sensing, and other applications. Several techniques have been proposed in the literature, including Time-of-Flight (ToF) [19,20], multi-apertures [21,22,23], Talbot's diffraction pixel sets [24,25,26], division of amplitude [27], and division of focal plane with microgrid or micro-lens arrays. Their disadvantages include the additional laser source and the time needed to process the laser signal in ToF, the large number of pixels required by Talbot's diffraction pixel sets, and a very large imager array in the case of multi-aperture image sensors.