Abstract

Recently, blind image quality assessment (BIQA) models based on deep neural networks (DNNs) have achieved impressive performance on existing datasets. However, due to the intrinsic imbalance of the training set, not all distortions or images are handled equally well. Online hard example mining (OHEM) is a promising way to alleviate this issue. Inspired by the recent finding that network pruning disproportionately hampers a model's memorization of a tractable subset of atypical, low-quality, long-tailed samples that are hard to memorize during training and easily “forgotten” during pruning, we propose an effective “plug-and-play” OHEM pipeline, especially for generalizable deep BIQA. Specifically, we train two parallel weight-sharing branches simultaneously, where one is the full model and the other is a “self-competitor” generated online from the full model by network pruning. We then leverage the prediction disagreement between the full model and its pruned variant (i.e., the self-competitor) to expose easily “forgettable” samples, which are regarded as the hard ones. We further enforce prediction consistency between the full model and its pruned variant to implicitly put more focus on these hard samples, which helps the full model recover the forgettable information lost through pruning. Extensive experiments across multiple datasets and BIQA models demonstrate that the proposed OHEM pipeline improves model performance and generalizability, as measured by correlation numbers and the group maximum differentiation (gMAD) competition. Our code is available at: https://github.com/wangzhihua520/IQA_with_OHEM
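To make the pipeline concrete, below is a minimal PyTorch sketch of one training step, assuming a BIQA model that maps a batch of images to scalar quality scores and a batch of mean opinion scores (MOS) as targets. The 30% magnitude-pruning ratio, the L1 fidelity and consistency losses, and the helper names pruned_view and ohem_step are illustrative assumptions, not the paper's exact implementation; masking the shared weights (rather than copying them) is one way to realize the weight-sharing branches described in the abstract.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call

def pruned_view(model: nn.Module, amount: float = 0.3) -> dict:
    """Build the parameter dict of the online "self-competitor":
    the smallest-magnitude entries of each weight matrix are zeroed.
    Masking keeps the two branches weight-sharing, so gradients from
    the pruned branch still flow back into the full model."""
    params = {}
    for name, p in model.named_parameters():
        if p.dim() > 1:  # prune conv/linear weights; keep biases and norms intact
            k = max(1, int(amount * p.numel()))
            thresh = p.detach().abs().flatten().kthvalue(k).values
            mask = (p.detach().abs() > thresh).to(p.dtype)
            params[name] = p * mask  # differentiable w.r.t. the shared weights
        else:
            params[name] = p
    return params

def ohem_step(model: nn.Module, images: torch.Tensor, mos: torch.Tensor,
              amount: float = 0.3, weight: float = 1.0) -> torch.Tensor:
    """One training step: the full branch fits the subjective scores,
    while a consistency term between the full and pruned predictions
    implicitly up-weights the samples the self-competitor "forgets"."""
    q_full = model(images).squeeze(-1)  # full-model quality predictions
    q_pruned = functional_call(model, pruned_view(model, amount), (images,))
    q_pruned = q_pruned.squeeze(-1)     # self-competitor predictions
    fidelity = F.l1_loss(q_full, mos)          # standard quality-regression loss
    consistency = F.l1_loss(q_full, q_pruned)  # large disagreement = hard sample
    return fidelity + weight * consistency

In a training loop one would call loss = ohem_step(model, images, mos), then loss.backward() and optimizer.step(). Because the pruned branch reuses the masked shared weights, the consistency gradient reaches the full model through both branches, which is the mechanism the abstract describes for recovering the information that pruning would otherwise erase.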
