Abstract

In monitoring applications, multi-view video sensor nodes must capture different views of a scene so that it can be understood clearly. These multi-view sequences contain a large volume of redundant data, which affects the storage, transmission, bandwidth, and lifetime of wireless video sensor nodes. A low-complexity coding technique is required to address these issues and to process multi-view sensor data. Hence, in this paper, a framework for a compressive sensing (CS)-based multi-view video codec using a frame approximation technique (CMVC-FAT) is proposed. Quantisation with entropy coding based on frame skipping is adopted to achieve efficient video compression. For better prediction of the skipped frames at the receiver, a frame approximation technique (FAT) algorithm is proposed. Simulation results reveal that the CMVC-FAT framework outperforms the existing method, achieving an 86.5% reduction in time and bits. It also shows an 83.75% reduction in transmission energy compared with transmitting raw frames.
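
To make the pipeline concrete, the following is a minimal sketch, not the paper's CMVC-FAT implementation: it illustrates CS measurement of non-skipped frames at the encoder and a naive approximation of a skipped frame at the decoder. The 25% measurement ratio, the Gaussian sensing matrix, and the temporal-average rule used in place of the paper's FAT algorithm are all illustrative assumptions; quantisation, entropy coding, and CS reconstruction are omitted.

```python
# Illustrative sketch only (assumptions noted in the lead-in above).
import numpy as np

rng = np.random.default_rng(0)

def cs_measure(frame, phi):
    """Project a vectorised frame onto the random measurement matrix phi."""
    return phi @ frame.ravel()

def approximate_skipped(prev_frame, next_frame):
    """Stand-in for the paper's FAT: a simple temporal average of neighbours."""
    return 0.5 * (prev_frame + next_frame)

# Toy example: three consecutive 32x32 frames; the middle frame is skipped.
h, w = 32, 32
frames = [rng.random((h, w)) for _ in range(3)]

m = int(0.25 * h * w)                      # assumed 25% measurement ratio
phi = rng.standard_normal((m, h * w))      # assumed random Gaussian sensing matrix

# Encoder side: only frames 0 and 2 are measured (then quantised and
# entropy coded in the full codec, not shown here).
y0 = cs_measure(frames[0], phi)
y2 = cs_measure(frames[2], phi)

# Decoder side: after CS reconstruction of frames 0 and 2 (omitted),
# the skipped frame 1 is approximated from its neighbours.
frame1_hat = approximate_skipped(frames[0], frames[2])
print("approximation MSE:", np.mean((frame1_hat - frames[1]) ** 2))
```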
