Abstract

Image clustering is a fundamental problem in computer vision and data mining. To mitigate the influence of appearance variation, many researchers cluster images using multiple features, i.e., multi-view image clustering. However, most existing methods consider either context information across images or content information within images, and fail to combine the two. In this paper, we propose a novel auto-weighted multi-view content-context information bottleneck (AMC2IB) method for image clustering. AMC2IB exploits content and context information simultaneously when partitioning images: the “content” characterizes the intrinsic information within each image, e.g., appearance features such as color or shape, while the “context” describes the relationships among images in each view, e.g., inter-image similarity. A maximum-entropy mechanism is introduced to learn the view weights automatically, so that the importance of different views can be integrated for effective clustering. Additionally, to remove the extra weight-regularization parameter in AMC2IB, we further propose an auto-weighted multi-view content-context information bottleneck without weight regularization (AMC2IBW) method. Both problems are formulated as information loss functions that maximally preserve the context and content information while the input images are compressed. Finally, a new alternating iterative method is designed to optimize both objective functions. Experimental results on five real-world multi-view image datasets demonstrate the superiority of the proposed methods.
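To make the auto-weighted formulation concrete, a generic multi-view information-bottleneck objective with maximum-entropy view weighting typically takes the form below. This is only an illustrative sketch: the symbols (view weights $w_v$, trade-off parameters $\beta_1,\beta_2$, entropy weight $\gamma$, content variable $C_v$, context variable $S_v$, and compressed representation $T$) are assumptions for exposition, not the paper's exact notation or objective.

\[
\min_{T,\,\mathbf{w}} \;\; \sum_{v=1}^{m} w_v \underbrace{\Big[ I(X_v; T) - \beta_1 I(T; C_v) - \beta_2 I(T; S_v) \Big]}_{L_v} \;+\; \gamma \sum_{v=1}^{m} w_v \log w_v,
\qquad \text{s.t.} \;\; \sum_{v=1}^{m} w_v = 1, \; w_v \ge 0,
\]

where $X_v$ denotes the $v$-th view of the input images and $L_v$ is the per-view information loss. Under this standard maximum-entropy weighting scheme, minimizing over $\mathbf{w}$ on the simplex gives the closed-form update $w_v \propto \exp(-L_v/\gamma)$, so views with smaller information loss automatically receive larger weights; $\gamma$ is the extra weight-regularization parameter that the AMC2IBW variant is designed to avoid.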
