Abstract
Models pre-trained on the ImageNet dataset are widely exploited for knowledge transfer in numerous downstream computer vision tasks, including weakly-supervised anomaly detection and segmentation. In anomaly segmentation in particular, prior work shows that representing images with feature maps extracted by pre-trained models significantly outperforms earlier techniques. Such representations require features that are both high-quality and task-specific, yet feature extractors taken directly from ImageNet are very general. An intuitive way to obtain stronger features is to transfer a pre-trained model to the target dataset. In this paper, however, we show that under weakly-supervised settings, naïve fine-tuning techniques that typically work for supervised learning can cause catastrophic feature-space collapse and greatly reduce performance. We therefore propose a topology-preserving constraint applied during transfer: our method preserves the topology graph of the feature space to keep it from collapsing under weak supervision. We then combine the transferred model with a simple anomaly detection and segmentation baseline for performance evaluation. Experiments show that our method achieves competitive accuracy on several benchmarks while setting a new state of the art for anomaly detection on the CIFAR100/10 and BTAD datasets.
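One way to read the topology-preserving constraint is as a regularizer that anchors each sample's distances to its nearest neighbours in the pre-trained feature space, so fine-tuning cannot collapse the neighbourhood structure. The sketch below is an illustrative assumption, not the paper's exact loss: the `knn_graph` helper and the k-NN distance-matching formulation are hypothetical, and a real implementation would compute this on mini-batch deep features with an autodiff framework.

```python
import numpy as np

def knn_graph(feats, k=3):
    """Return each sample's k nearest-neighbour indices and the full distance matrix."""
    # pairwise Euclidean distances between feature vectors
    d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # exclude self-matches
    return np.argsort(d, axis=1)[:, :k], d

def topology_penalty(old_feats, new_feats, k=3):
    """Penalize changes in distances to the neighbours defined by the pre-trained features."""
    # the "topology graph" is fixed by the pre-trained (old) feature space
    nbrs, d_old = knn_graph(old_feats, k)
    d_new = np.linalg.norm(new_feats[:, None, :] - new_feats[None, :, :], axis=-1)
    rows = np.arange(len(old_feats))[:, None]
    # mean squared change of neighbour distances after transfer
    return np.mean((d_new[rows, nbrs] - d_old[rows, nbrs]) ** 2)
```

The penalty is zero when fine-tuning moves the features rigidly (distances to pre-trained neighbours are unchanged) and grows as the local geometry is distorted, which is the collapse the constraint is meant to prevent.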