Abstract

This Letter proposes an effective light-field 3D salient object detection (SOD) method, inspired by the idea that the spatial and angular information inherent in a light field implicitly encodes the geometry and reflection characteristics of the observed scene. These characteristics provide effective background cues and depth information for 3D saliency reconstruction, which can greatly improve the accuracy of object detection and recognition. We use convolutional neural networks (CNNs) to detect the saliency of each elemental image (EI), corresponding to a different viewpoint, in an elemental image array (EIA); the salient EIA is then reconstructed using a micro-lens array, forming a 3D saliency map in the reconstructed space. Experimental results show that our method generates high-quality 3D saliency maps that can be observed simultaneously from different viewing angles and positions.
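To illustrate the per-view stage only, the sketch below tiles an EIA into its elemental images, applies a saliency model to each EI, and reassembles a salient EIA. It is a minimal illustration, not the authors' implementation: the function names are hypothetical, the CNN is represented by a generic callable, and a simple contrast-based placeholder stands in for the trained network; the optical reconstruction through the micro-lens array is not modeled.

```python
import numpy as np

def detect_salient_eia(eia, saliency_net, ei_rows, ei_cols):
    """Apply a per-view saliency model to every elemental image (EI)
    of an elemental image array (EIA) and reassemble the results.

    eia          : H x W float array holding the full EIA
    saliency_net : callable mapping one EI (h x w array) to a saliency
                   map of the same size (e.g. a trained CNN; a
                   placeholder is used below)
    ei_rows/cols : number of EIs along each axis of the EIA
    """
    H, W = eia.shape
    h, w = H // ei_rows, W // ei_cols              # size of one EI
    salient_eia = np.zeros_like(eia, dtype=np.float32)

    for i in range(ei_rows):
        for j in range(ei_cols):
            ei = eia[i*h:(i+1)*h, j*w:(j+1)*w]     # one viewpoint
            sal = saliency_net(ei)                 # per-EI saliency map
            salient_eia[i*h:(i+1)*h, j*w:(j+1)*w] = sal
    return salient_eia

def dummy_saliency(ei):
    """Center-free intensity-contrast placeholder standing in for the CNN."""
    contrast = np.abs(ei - ei.mean())
    return contrast / (contrast.max() + 1e-8)

if __name__ == "__main__":
    eia = np.random.rand(10 * 32, 10 * 32)         # 10x10 EIs of 32x32 pixels
    salient = detect_salient_eia(eia, dummy_saliency, 10, 10)
    print(salient.shape)                           # (320, 320)
```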
