Developing accurate just-noticeable difference (JND) models is challenged by the complicated characteristics of the human visual system (HVS) and the nonstationary features of video sequences. Great efforts have been devoted to JND modeling, and inspiring performance improvements have been reported in the literature, especially for spatial JND models. However, there is both an urgent need and technical potential to improve temporal JND models so that they fully account for temporal perception characteristics. Temporal JND modeling poses two challenges: how to extract perceptual feature parameters from the source video, and how to quantitatively characterize the interaction between these feature parameters and HVS characteristics. First, this work extracts content-aware temporal feature parameters that have predominant impacts on visual perception, including motion (foreground/background), pixel-correspondence duration, and inter-frame residue fluctuation intensity along the temporal trajectory, and investigates the HVS responses to these four heterogeneous feature parameters. Second, this work proposes respective probability density functions (PDFs), in the perceptual sense, to quantitatively depict the attention and suppression responses to the feature parameters, accounting for temporal perception characteristics. Using these PDF models, we fuse the heterogeneous feature parameters in a uniform dimension, i.e., visual attention measured by self-information and masking uncertainty measured by information entropy, thereby homogenizing the heterogeneous parameters. Third, from the self-information and entropy results, this work derives a temporal weight model that strikes a balance between visual attention and masking suppression, uses it to adjust the spatial JND threshold, and thereby develops an improved spatiotemporal JND model. Intensive simulation results verify the effectiveness of the proposed spatiotemporal JND profile, which achieves competitive model accuracy compared with state-of-the-art candidate models.
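The abstract does not give the paper's actual formulas, but the fusion idea can be illustrated with a minimal Python sketch: hypothetical per-parameter PDFs yield self-information (attention) and entropy (masking uncertainty), which are balanced into a temporal weight that scales a spatial JND threshold. The function names, the exponential weighting form, the parameter `alpha`, and the clipping bounds below are all assumptions for illustration, not the authors' model.

```python
import numpy as np

def self_information(p):
    # Self-information -log2(p) of an observed feature value: rare
    # (salient) values yield larger attention scores.
    return -np.log2(np.clip(p, 1e-12, 1.0))

def entropy(pdf):
    # Shannon entropy of a discretized PDF: larger entropy implies more
    # masking uncertainty and thus a higher tolerable distortion.
    p = np.clip(np.asarray(pdf, dtype=float), 1e-12, None)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

def temporal_weight(attention_scores, masking_scores, alpha=0.5):
    # Hypothetical fusion: attention (self-information) should shrink the
    # JND threshold, masking uncertainty (entropy) should enlarge it.
    # The exponential balance, `alpha`, and the [0.3, 2.0] bounds are
    # assumptions, not the paper's actual weight model.
    a = float(np.mean(attention_scores))
    m = float(np.mean(masking_scores))
    return float(np.clip(np.exp(alpha * (m - a)), 0.3, 2.0))

def spatiotemporal_jnd(jnd_spatial, attention_scores, masking_scores):
    # The temporal weight modulates a given spatial JND threshold map.
    return jnd_spatial * temporal_weight(attention_scores, masking_scores)

# Toy usage with the four heterogeneous temporal parameters (foreground
# motion, background motion, duration, residue fluctuation), each with a
# hypothetical 32-bin discretized PDF.
pdfs = [np.random.dirichlet(np.ones(32)) for _ in range(4)]
observed_bins = [5, 12, 20, 7]
att = [self_information(pdf[b]) for pdf, b in zip(pdfs, observed_bins)]
msk = [entropy(pdf) for pdf in pdfs]
jnd_s = np.full((4, 4), 3.0)   # toy spatial JND threshold map
jnd_st = spatiotemporal_jnd(jnd_s, att, msk)
```

The design point this sketch captures is the homogenization step: self-information and entropy put the four heterogeneous parameters on a common informational scale before a single scalar weight is applied to the spatial threshold.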