Abstract

Video compression is of great significance in the transmission of motion pictures. Video compression techniques aim to remove different types of redundancy within and between video sequences. In the temporal domain, they remove the redundancy between highly correlated consecutive frames of the video; in the spatial domain, they remove the redundancy between highly correlated neighboring pixels (samples) within the same frame. Emerging neural-network-based video coding research focuses either on improving existing video codecs by producing better predictions within the same codec framework or on holistic, end-to-end video compression schemes. Current neural-network-based video compression adopts a static codebook to achieve compression, which prevents learning from new samples. This paper proposes a modified video compression model that adopts a genetic algorithm to build an optimal codebook for adaptive vector quantization, which is used as the activation function inside the neural network's hidden layer. A background subtraction algorithm is employed to extract moving objects within frames and generate the context-based initial codebook. Furthermore, Differential Pulse Code Modulation (DPCM) is utilized for lossless compression of significant wavelet coefficients, whereas low-energy coefficients are lossy-compressed using Learning Vector Quantization (LVQ) neural networks. Finally, Run-Length Encoding (RLE) is applied to the quantized coefficients to achieve a higher compression ratio. Experiments demonstrate the system's ability to achieve a higher compression ratio with acceptable quality as measured by PSNR.
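The abstract does not spell out how the genetic algorithm builds the vector-quantization codebook, so the following is only a minimal sketch under assumptions: tournament selection, one-point crossover on flattened codebooks, Gaussian mutation, and a fitness equal to the negative mean squared quantization error over training block vectors. These choices are hypothetical illustrations, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantization_error(codebook, vectors):
    """Mean squared distance from each training vector to its nearest codeword."""
    d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
    return np.mean(d.min(axis=1) ** 2)

def evolve_codebook(vectors, n_codewords=16, pop_size=20, generations=50,
                    mutation_scale=0.05):
    """Evolve a VQ codebook with a simple generational GA (assumed operators:
    tournament selection, one-point crossover, Gaussian mutation, elitism)."""
    dim = vectors.shape[1]
    # Initialize each individual with random training vectors as codewords.
    pop = np.stack([vectors[rng.choice(len(vectors), n_codewords, replace=False)]
                    for _ in range(pop_size)])
    for _ in range(generations):
        fitness = np.array([-quantization_error(cb, vectors) for cb in pop])
        new_pop = [pop[fitness.argmax()].copy()]   # elitism: carry over the best
        while len(new_pop) < pop_size:
            # Binary tournament selection of two parents.
            a, b = rng.choice(pop_size, 2, replace=False)
            p1 = pop[a] if fitness[a] > fitness[b] else pop[b]
            a, b = rng.choice(pop_size, 2, replace=False)
            p2 = pop[a] if fitness[a] > fitness[b] else pop[b]
            # One-point crossover on the flattened codebooks.
            cut = rng.integers(1, n_codewords * dim)
            child = np.concatenate([p1.reshape(-1)[:cut],
                                    p2.reshape(-1)[cut:]]).reshape(n_codewords, dim)
            # Gaussian mutation of the codewords.
            child += rng.normal(0.0, mutation_scale, child.shape)
            new_pop.append(child)
        pop = np.stack(new_pop)
    fitness = np.array([-quantization_error(cb, vectors) for cb in pop])
    return pop[fitness.argmax()]

# Toy usage: 4x4 image blocks flattened into 16-dimensional training vectors.
blocks = rng.random((500, 16))
codebook = evolve_codebook(blocks)
print(quantization_error(codebook, blocks))
```

In the proposed model the training vectors would come from the background-subtracted moving regions, which is what makes the resulting initial codebook context-based.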
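To make the coefficient-coding stages concrete, here is a minimal sketch of first-order DPCM and run-length encoding. The previous-sample predictor and integer coefficients are assumptions for illustration; the paper does not specify its predictor or symbol format.

```python
import numpy as np

def dpcm_encode(coeffs):
    """Lossless DPCM with a previous-sample predictor: transmit the first
    coefficient as-is, then the prediction residuals."""
    coeffs = np.asarray(coeffs, dtype=np.int64)
    return np.concatenate([coeffs[:1], np.diff(coeffs)])

def dpcm_decode(residuals):
    """Invert DPCM by accumulating the residuals."""
    return np.cumsum(residuals)

def rle_encode(values):
    """Run-length encode a 1-D sequence as (value, run_length) pairs."""
    runs, prev, count = [], values[0], 1
    for v in values[1:]:
        if v == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = v, 1
    runs.append((prev, count))
    return runs

# Toy usage: quantized wavelet coefficients with long zero runs compress well
# under RLE, while significant coefficients round-trip losslessly through DPCM.
q = [0, 0, 0, 5, 5, 0, 0, 0, 0, -3, 0, 0]
print(rle_encode(q))                       # [(0, 3), (5, 2), (0, 4), (-3, 1), (0, 2)]
residuals = dpcm_encode([12, 13, 15, 15, 14])
assert np.array_equal(dpcm_decode(residuals), [12, 13, 15, 15, 14])
```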
