Abstract

Modeling visual quality is a challenging problem that is closely related to many factors of human perception. Subjectively rated visual quality databases facilitate parametric modeling methods. However, a single database provides only sparse and insufficient samples compared with the huge space of visual signals. Fortunately, co-training on multiple databases may protect a visual quality metric from over-fitting and keep it robust. We propose the Additive Log-Logistic Model (ALM) to formulate visual quality and maximum-likelihood (ML) regression to co-train the ALM on multiple databases. As an additive nonlinear model, the ALM has flexible monotonic or non-monotonic partial derivatives and thus can capture various impairments with respect to full-reference and/or no-reference features. Benefiting from the ALM-ML framework, we have developed 1) a no-reference video quality metric, which won the cross-validation conducted by ITU-T SG 12 (Study Group 12 of the Telecommunication Standardization Sector of the International Telecommunication Union) and was adopted as Standard ITU-T P.1202.2 Mode 2, and 2) a full-reference image quality metric, which achieves steady accuracy on 11 databases and has plausible explanations in visual physiology.
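
The abstract does not spell out the ALM formulation or the co-training procedure, so the sketch below is only a rough illustration of how such a model could be set up: a logistic link applied to an additive function of log-transformed features (which, per feature, is a log-logistic curve), fitted by maximum likelihood jointly over several databases with a shared set of weights and a separate noise scale per database. The function names (`alm_predict`, `cotrain`), the Gaussian observation model, and the per-database variance parameterization are assumptions for illustration, not the paper's or ITU-T P.1202.2's definitions.

```python
# Hypothetical sketch, not the paper's exact model: an additive log-logistic
# quality predictor co-trained on multiple databases by maximum likelihood.
import numpy as np
from scipy.optimize import minimize


def alm_predict(features, weights, bias):
    """Logit of normalized quality = additive (linear) function of log-features,
    i.e., a log-logistic response curve in each individual feature."""
    z = bias + np.log(np.maximum(features, 1e-12)) @ weights
    return 1.0 / (1.0 + np.exp(-z))  # quality normalized to (0, 1)


def neg_log_likelihood(params, datasets, n_feat):
    """Gaussian likelihood with one noise scale per database, so databases with
    different subjective-rating noise contribute with appropriate weight."""
    weights, bias = params[:n_feat], params[n_feat]
    sigmas = np.exp(params[n_feat + 1:])  # one sigma per database, kept positive
    nll = 0.0
    for (X, mos), sigma in zip(datasets, sigmas):
        resid = mos - alm_predict(X, weights, bias)
        nll += 0.5 * np.sum((resid / sigma) ** 2) + len(mos) * np.log(sigma)
    return nll


def cotrain(datasets):
    """datasets: list of (feature matrix, MOS normalized to (0, 1)) pairs,
    one pair per subjectively rated database."""
    n_feat = datasets[0][0].shape[1]
    x0 = np.zeros(n_feat + 1 + len(datasets))  # weights, bias, log-sigmas
    res = minimize(neg_log_likelihood, x0, args=(datasets, n_feat),
                   method="L-BFGS-B")
    return res.x
```

In this reading, the per-database noise scales act as nuisance parameters: they absorb differences in rating scale and subject variability across databases, while the shared weights are the co-trained quality model.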
