We evaluate the degradation of a video sequence under an increasing packet loss ratio (PLR), using several quality measures: the mean square error (MSE); Amax (the root MSE, weighted by the maximum signal change between the current pixel and its eight surrounding neighbors); the video perceptual distortion measure (VPDM); and a perceptual distortion metric for digital color video (VPDM2). Packet losses can occur at different positions in the compressed video sequence, for example in headers, I frames, P frames, and B frames, and each loss results in a different amount of distortion in the received video signal. We compare two packet loss models, IID (independent and identically distributed losses) and burst (a sequence of packets lost together), and examine the influence of packet loss in a communication network on the quality of MPEG-2 compressed digital video. Three of the measures emulate features of the human visual system: the VPDM accounts for temporal masking of distortion (the human eye is less sensitive to degradation within a frame when it occurs during rapid changes between consecutive frames, such as scene cuts); Amax incorporates spatial masking, whereby small objects in high-frequency neighborhoods are less noticeable; and the VPDM2 takes into account the color channels of human vision together with temporal and spatial masking. The results show that burst losses degrade the quality of the compressed video stream significantly less than IID losses: under the burst loss model the video quality remains tolerable at a PLR of 0.1, whereas under the IID loss model the video is no longer acceptable to a human viewer even at lower PLRs (e.g., 0.01).
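To make the contrast between the two loss models concrete, the sketch below generates IID (Bernoulli) and bursty packet-loss patterns for a given PLR and computes per-frame MSE against a reference sequence. The Gilbert-style two-state Markov chain used for the burst model, the mean_burst_len parameter, and all function names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def iid_loss_mask(n_packets, plr, rng):
    """IID model: each packet is dropped independently with probability plr."""
    return rng.random(n_packets) < plr

def burst_loss_mask(n_packets, plr, mean_burst_len, rng):
    """Burst model (assumed Gilbert-style two-state Markov chain): losses occur
    in runs with mean length mean_burst_len, while the overall loss ratio stays
    close to plr."""
    leave = 1.0 / mean_burst_len        # probability of leaving the bad (loss) state
    enter = plr * leave / (1.0 - plr)   # chosen so the stationary loss ratio equals plr
    lost = np.zeros(n_packets, dtype=bool)
    bad = False
    for i in range(n_packets):
        bad = (rng.random() >= leave) if bad else (rng.random() < enter)
        lost[i] = bad
    return lost

def per_frame_mse(reference, degraded):
    """Mean square error per frame for videos given as (frames, height, width) arrays."""
    diff = reference.astype(np.float64) - degraded.astype(np.float64)
    return (diff ** 2).mean(axis=(1, 2))

rng = np.random.default_rng(0)
print(iid_loss_mask(20, 0.1, rng).astype(int))          # scattered single losses
print(burst_loss_mask(20, 0.1, 4.0, rng).astype(int))   # losses clustered in runs
```

Concentrating the same number of lost packets into fewer, longer runs touches fewer independent parts of the stream, which is consistent with the abstract's finding that burst losses are less damaging than IID losses at the same PLR.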