Abstract

In personalized images, irrelevant background content is easily quantized into the same visual word as the main target, and the quantization process inevitably discards a large amount of visual information. When the image content is complex, this sharply reduces the quality of the generated topics. This paper proposes a Multi-Source Big Data Fusion Annotation (MSBDFA) model. The model retrieves similar personalized images by analyzing the multi-source information associated with a query image, and labels the query image with the annotations of those similar images. For personalized images with complex background content, the retrieval stage is based on complete-information modeling: a high-dimensional Gaussian distribution models the continuous visual features directly, and a two-level spectral clustering algorithm assigns regional topics, so that the complete local information carried by the visual features is embedded into the image's global features. Because no quantization step is required, visual information is fully retained during modeling, and targets buried in complex backgrounds can be classified more accurately. Experimental results on a standard database show that the proposed method generates high-quality personalized image topics in complex scenes and achieves good retrieval performance.
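
One way to read the retrieval stage described above is sketched below: continuous local features are summarized by Gaussian statistics instead of being quantized into visual words, a two-level spectral clustering step groups them into regional topics, and the per-topic statistics are concatenated into a global descriptor used for nearest-neighbour retrieval, whose results supply candidate annotations. This is a minimal illustration assuming NumPy and scikit-learn and a hypothetical per-image local-feature matrix; the Gaussian parameterization and cluster counts are assumptions, not the authors' released implementation.

# Hypothetical sketch of the retrieval stage described in the abstract:
# continuous local features -> Gaussian modeling -> two-level spectral
# clustering into regional topics -> global descriptor -> k-NN retrieval.
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.neighbors import NearestNeighbors

def gaussian_stats(feats):
    # Diagonal-Gaussian summary (per-dimension mean and variance) of the
    # continuous features, so no visual-word quantization is needed.
    return np.concatenate([feats.mean(axis=0), feats.var(axis=0)])

def two_level_topics(feats, n_regions=8, n_topics=4):
    # Level 1: spectral clustering of local features into regions.
    regions = SpectralClustering(n_clusters=n_regions).fit_predict(feats)
    # Level 2: spectral clustering of region centroids into topics.
    centroids = np.vstack([feats[regions == r].mean(axis=0)
                           for r in range(n_regions)])
    topics = SpectralClustering(n_clusters=n_topics).fit_predict(centroids)
    return regions, topics

def global_descriptor(feats, n_regions=8, n_topics=4):
    # Embed the local information into a global feature by concatenating
    # the Gaussian statistics of each regional topic.
    regions, topics = two_level_topics(feats, n_regions, n_topics)
    parts = []
    for t in range(n_topics):
        mask = np.isin(regions, np.where(topics == t)[0])
        parts.append(gaussian_stats(feats[mask]))
    return np.concatenate(parts)

def retrieve_similar(query_feats, database_descriptors, k=5):
    # Nearest neighbours in descriptor space; their annotations would then
    # be fused to label the query image.
    query = global_descriptor(query_feats)[None, :]
    nn = NearestNeighbors(n_neighbors=k).fit(database_descriptors)
    _, idx = nn.kneighbors(query)
    return idx[0]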
