The paper presents a series of three new video quality model standards for the assessment of video sequences of up to UHD/4K resolution. They were developed in a competition within the International Telecommunication Union (ITU-T), Study Group 12, in collaboration with the Video Quality Experts Group (VQEG), over a period of more than two years.

A large video quality test set with a total of 26 individual databases was created, with 13 used for training and 13 for validation and selection of the winning models. For each database, video quality laboratory tests were run with at least 24 subjects each. The 5-point Absolute Category Rating (ACR) scale was used for rating, with Mean Opinion Scores (MOS) calculated as ground truth. To represent today's commonly applied HTTP-based adaptive streaming context, the test sequences comprise a variety of encoding settings, bitrates, resolutions and frame rates for the three codecs H.264/AVC, H.265/HEVC and VP9, applied to a wide range of source sequences of around 8 s duration. Processing was carried out both with an FFmpeg-based processing chain developed specifically for the competition and via upload and encoding through exemplary online streaming services. The resulting data represents the largest lab-test-based dataset used for video quality model development to date, with a total of around 5,000 test sequences.

The paper addresses the three models ultimately standardized in the P.1204 Recommendation series, which cover different model types and different applications: (i) P.1204.3, a no-reference bitstream-based model with access to encoded bitstream information; (ii) P.1204.4, a pixel-based model using information from the reference and the processed video; and (iii) P.1204.5, a no-reference hybrid model using both bitstream and pixel information without knowledge of the reference.
The paper outlines the development process and provides comprehensive details on the statistical evaluation, test databases, model algorithms and validation results, as well as a performance comparison with state-of-the-art models.
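The ground-truth MOS values mentioned above are, per standard subjective-testing practice, the per-sequence mean of the 5-point ACR ratings. A minimal sketch of that computation, with an accompanying 95% confidence interval (the function name and the example ratings are illustrative, not from the paper):

```python
from math import sqrt
from statistics import mean, stdev

def mos_with_ci(ratings, z=1.96):
    """Mean Opinion Score and normal-approximation 95% CI half-width
    for a list of 5-point ACR ratings (1 = bad ... 5 = excellent)."""
    n = len(ratings)
    m = mean(ratings)
    half_width = z * stdev(ratings) / sqrt(n) if n > 1 else 0.0
    return m, half_width

# Illustrative ratings from a hypothetical panel of 10 subjects.
ratings = [5, 4, 4, 3, 5, 4, 4, 3, 4, 5]
mos, ci = mos_with_ci(ratings)  # mos == 4.1
```

With at least 24 subjects per database, as in the tests described here, the confidence intervals per sequence are correspondingly narrow.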