Abstract

Transmission of video data from multiple sensors over a wireless network requires an enormous amount of bandwidth and could easily overwhelm the system. However, by exploiting the redundancy between the video data collected by different cameras, in addition to the inherent temporal and spatial redundancy within each video sequence, the required bandwidth can be significantly reduced. Well-established video compression standards, such as MPEG-1, MPEG-2, MPEG-4, H.261, H.263, and H.264, all rely on efficient transform coding of motion-compensated frames, using the discrete cosine transform (DCT) or computationally efficient approximations to it. However, they can only be used in a protocol that encodes the data of each sensor independently. Such methods exploit the spatial and temporal redundancy within each video sequence but completely ignore the redundancy between the sequences.
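The standards listed above all rest on block-based DCT transform coding. As an illustrative sketch only (not code from this paper), the following pure-Python orthonormal 2D DCT-II shows the energy-compaction property such codecs exploit: a spatially redundant (here, constant) block concentrates its energy in the single DC coefficient, leaving the AC coefficients negligible.

```python
import math

def dct2(block):
    """Orthonormal 2D DCT-II of an NxN block, as used in DCT-based codecs."""
    n = len(block)

    def alpha(k):
        # Normalization factor for the orthonormal DCT-II.
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)

    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            out[u][v] = alpha(u) * alpha(v) * s
    return out

# A flat 8x8 block of pixel value 128: all energy lands in the DC term,
# so only one coefficient needs to be transmitted at full precision.
flat = [[128.0] * 8 for _ in range(8)]
coeffs = dct2(flat)
```

Here `coeffs[0][0]` (the DC coefficient) equals 1024, while every AC coefficient is numerically zero; this compaction is what makes quantizing and entropy-coding the transformed block efficient within a single sequence, while leaving inter-camera redundancy untouched.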
