Abstract
In this paper, we propose an encoder-decoder neural network model, referred to as DeepBinaryMask, for video compressive sensing. In video compressive sensing, a single frame is acquired using a set of coded masks (the sensing matrix), from which a number of video frames, equal to the number of coded masks, is reconstructed. The proposed framework is an end-to-end model in which the sensing matrix is trained jointly with the video reconstruction. The encoder maps a video block to compressive measurements by learning the binary elements of the sensing matrix. The decoder is trained to map the measurements from a video patch back to a video block via several hidden layers of a Multi-Layer Perceptron network. The predicted video blocks are stacked together to recover the unknown video sequence. The reconstruction performance improves when using the sensing mask trained by the network rather than other mask designs, such as random masks, across a wide variety of compressive sensing reconstruction algorithms. Finally, our analysis and discussion offer insights into the characteristics of the trained mask designs that lead to the improved reconstruction quality.
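The sensing process described above can be illustrated with a minimal forward-pass sketch. This is not the paper's implementation: the block size (8×8), number of frames (16), hidden-layer width, and all weights are hypothetical placeholders, and the binary masks are drawn at random rather than trained end-to-end as in DeepBinaryMask.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: an 8x8 spatial block spanning T = 16 frames.
B, T = 8, 16
video_block = rng.random((T, B, B))  # the unknown video block

# Encoder: one binary mask per frame (the sensing matrix). The paper
# learns these binary elements; here they are random for illustration.
masks = (rng.random((T, B, B)) > 0.5).astype(float)

# A single compressive measurement: masked frames summed over time,
# so T frames collapse into one coded snapshot.
measurement = (masks * video_block).sum(axis=0)  # shape (B, B)

# Decoder sketch: an MLP mapping the flattened measurement back to the
# full T*B*B video block. Weights are untrained placeholders.
W1 = rng.standard_normal((256, B * B)) * 0.01
W2 = rng.standard_normal((T * B * B, 256)) * 0.01
hidden = np.maximum(0.0, W1 @ measurement.ravel())  # ReLU hidden layer
reconstruction = (W2 @ hidden).reshape(T, B, B)

print(measurement.shape, reconstruction.shape)
```

In the actual model, the mask elements and MLP weights would be optimized jointly on training video blocks, and the predicted blocks stacked to form the recovered sequence.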