Abstract

2D optical imaging systems irreversibly lose stereo visual features, which limits pixel-level semantic segmentation, especially in challenging scenes. To overcome this limitation, light field imaging with microlens arrays can acquire 4D multi-angle subaperture images, providing a viable alternative for semantic segmentation. However, these images are redundant and difficult to process. We therefore propose a macro generalized epipolar-plane image (macro-GEPI) representation. The macro-GEPI is obtained by slicing the subaperture images, stacking the slices in sequence, and finally aligning them. This new representation clearly highlights the spatial-angular interaction information of different objects within a single image. Based on this, we propose a convolutional neural network (CNN) that independently learns the light field spatial-angular information for semantic segmentation. We also present a light field dataset composed of 1840 annotated macro-GEPIs. We evaluate the proposed network against state-of-the-art (SOTA) algorithms. The experiments show that our network outperforms SOTA algorithms on the macro-GEPI dataset and on a benchmark light field dataset, especially in challenging scenes. Moreover, our algorithm achieves a high mean intersection over union of 90.83%, demonstrating the reliability of the light field macro-GEPI representation and the effectiveness of our CNN algorithm.
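To make the slice-stack-align construction concrete, below is a minimal sketch of how a macro-GEPI-style image might be assembled from a grid of subaperture views. The abstract does not specify the exact slicing axes or alignment step, so this follows the conventional epipolar-plane-image construction (fixing one angular coordinate, stacking the same spatial row across the remaining views, then tiling the per-row stacks); the array layout, function name, and choice of the central vertical view are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def build_macro_gepi(light_field):
    """Sketch of a macro-GEPI-style construction (assumed layout).

    light_field : ndarray of shape (U, V, H, W, C)
        4D light field given as a U x V grid of subaperture images,
        each of spatial size H x W with C channels.
    """
    U, V, H, W, C = light_field.shape
    central_u = U // 2  # assumption: fix the central vertical view

    # For each spatial row y, stack that row from every horizontal
    # view into one epipolar-plane slice of shape (V, W, C).
    slices = [light_field[central_u, :, y, :, :] for y in range(H)]

    # Tile the per-row slices in sequence into a single image of
    # shape (H * V, W, C), so spatial and angular variation appear
    # together in one 2D representation.
    return np.concatenate(slices, axis=0)


# Usage example with a synthetic light field (5 x 5 views, 64 x 64 pixels).
lf = np.random.rand(5, 5, 64, 64, 3).astype(np.float32)
macro_gepi = build_macro_gepi(lf)
print(macro_gepi.shape)  # (320, 64, 3)
```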
