Abstract

Compared with support vector data description (SVDD), deep SVDD (DSVDD) is better suited to large-scale data sets. DSVDD uses a mapping network in place of the kernel mapping of SVDD, and its objective is to simultaneously learn the optimal connection weights of the mapping network and the minimum-volume hypersphere. To further improve the performance of DSVDD on large-scale data sets and to obtain discriminative features of the given samples in a self-supervised manner, contrastive DSVDD (CDSVDD) is proposed in this study. In the pre-training phase of CDSVDD, the contrastive loss and the rotation prediction loss are jointly minimized to obtain optimal feature representations, which are then used to determine the hypersphere center. In the training phase of CDSVDD, the distances between the learned feature representations and the hypersphere center are minimized together with the contrastive loss, yielding the optimal network connection weights, the minimum-volume hypersphere, and the optimal feature representations. In addition, CDSVDD effectively avoids the hypersphere collapse problem of DSVDD. An ablation study verifies that, compared with determining the hypersphere center from the feature representations of the original samples, determining it from the feature representations of the augmented samples gives CDSVDD a better hypersphere boundary and more compact feature representations. Experimental results on four benchmark data sets demonstrate that the proposed CDSVDD achieves better detection performance than six related methods.
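To make the training objective concrete, the following is a minimal PyTorch sketch of a joint loss in the spirit the abstract describes: a Deep SVDD-style distance-to-center term combined with an NT-Xent contrastive term over two augmented views. The encoder architecture, the loss weight `lambda_c`, the temperature, and the way the center is fixed are illustrative assumptions, not the authors' exact formulation.

```python
# Illustrative sketch only: joint hypersphere-distance + contrastive objective.
# All hyperparameters and the center-estimation step are assumptions.
import torch
import torch.nn.functional as F


def nt_xent_loss(z1, z2, temperature=0.5):
    """Standard NT-Xent contrastive loss between two batches of embeddings."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, d) unit vectors
    sim = z @ z.t() / temperature                         # pairwise cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))            # exclude self-similarity
    # Positive of sample i in view 1 is sample i in view 2, and vice versa.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)


def cdsvdd_style_loss(encoder, x_view1, x_view2, center, lambda_c=1.0):
    """Deep SVDD distance term on both views plus a weighted contrastive term."""
    z1, z2 = encoder(x_view1), encoder(x_view2)
    dist = ((z1 - center) ** 2).sum(dim=1).mean() + ((z2 - center) ** 2).sum(dim=1).mean()
    return dist + lambda_c * nt_xent_loss(z1, z2)


# One hedged reading of the abstract: after pre-training, fix the center as the
# mean embedding of augmented training samples (hypothetical helper usage).
# with torch.no_grad():
#     center = encoder(augmented_batch).mean(dim=0)
```

In this sketch the distance term pulls embeddings toward a fixed, non-trainable center while the contrastive term keeps representations from collapsing to a single point, which is one plausible way such a combination can mitigate the hypersphere collapse issue mentioned above.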
