Abstract

To address the problems of video saliency detection and poor fusion quality, a video saliency detection model and a fusion model are proposed. Video saliency detection is divided into spatial saliency detection and temporal saliency detection. In the spatial domain, inspired by the hierarchical perception of the visual cortex and by Gestalt visual psychology, we propose a three-layer hierarchical saliency detection model for single-frame images. The video frame is simplified layer by layer, and the results are combined into a whole conscious visual object that is easier to process. At the bottom layer of the model, candidate salient regions are formed by a nonlinear simplification of the feature images (dual-color and luminance feature images), in accordance with biological visual characteristics. At the middle layer, the most competitive candidate regions are selected as local salient regions according to the minimum Frobenius norm (F-norm) of the corresponding matrices. At the top layer, the local salient regions are integrated according to the core theory of Gestalt visual psychology, yielding the spatial saliency map. In the temporal domain, based on the assumption that a moving object is consistent in location, motion range, and direction, the optical-flow points detected by the Lucas-Kanade method are classified to eliminate noise interference, and the motion saliency of a moving object is then measured by its motion amplitude. Finally, based on the difference in visual sensitivity between dynamic and static information, and between color and gray information, a general fusion model for temporal- and spatial-domain salient regions is proposed.
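The middle-layer selection step can be illustrated with a minimal sketch. The abstract only states that candidates are ranked by the minimum matrix Frobenius norm, so the region representation (grayscale patches as 2-D arrays) and the function name below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def select_salient_region(candidates):
    """Illustrative sketch (assumed interface): return the index of the
    candidate patch whose matrix Frobenius norm is smallest, following
    the minimum-F-norm criterion described in the abstract."""
    norms = [np.linalg.norm(patch, ord='fro') for patch in candidates]
    return int(np.argmin(norms))

# Toy example: three hypothetical 2x2 candidate patches.
patches = [np.array([[3.0, 4.0], [0.0, 0.0]]),   # F-norm = 5
           np.array([[1.0, 0.0], [0.0, 1.0]]),   # F-norm = sqrt(2)
           np.array([[2.0, 2.0], [2.0, 2.0]])]   # F-norm = 4
print(select_salient_region(patches))  # → 1 (smallest norm)
```

The Frobenius norm is simply the square root of the sum of squared matrix entries, so this ranking is cheap to compute per candidate region.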
The saliency detection results for single-frame images and video sequence frames are represented by the gray color model and the Munsell color system, respectively. Experimental results show that the proposed saliency detection method suppresses background noise, solves the sparse-pixel problem of moving objects, and effectively detects salient regions in video. The proposed fusion model can display both kinds of saliency results simultaneously in a single image of a complex scene, and it prevents the detection results from becoming chaotic even when the image content is complicated.
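The abstract does not specify the form of the fusion model, so the following is only a hedged sketch of one plausible reading: a weighted combination in which motion (dynamic) saliency receives a larger weight than spatial (static) saliency, reflecting the higher visual sensitivity to motion mentioned above. The function name and the weight value are assumptions for illustration.

```python
import numpy as np

def fuse_saliency(spatial, temporal, w_motion=0.7):
    """Illustrative weighted fusion of spatial and temporal saliency
    maps (values in [0, 1]); w_motion > 0.5 models the stronger visual
    sensitivity to dynamic information assumed in the lead-in."""
    fused = w_motion * temporal + (1.0 - w_motion) * spatial
    return np.clip(fused, 0.0, 1.0)

# Toy 2x2 saliency maps.
spatial = np.array([[0.2, 0.8], [0.1, 0.5]])
temporal = np.array([[0.9, 0.1], [0.0, 0.6]])
print(fuse_saliency(spatial, temporal))
```

The paper's actual model additionally distinguishes color from gray information when rendering the fused result, which this scalar-weight sketch does not attempt to reproduce.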
