Abstract

Feature representations generated through triplet-based deep metric learning offer significant advantages for facial expression recognition (FER). Each threshold in the triplet loss shapes a distinct distribution of inter-class variations and therefore yields a unique representation of expression features. Nonetheless, pinpointing the optimal threshold for the triplet loss is a formidable challenge, as the ideal threshold varies not only across datasets but also among classes within the same dataset. In this paper, we propose a novel multi-threshold deep metric learning approach that bypasses the laborious process of threshold validation and markedly improves the effectiveness of expression feature representations. Instead of selecting a single optimal threshold from a valid range, we sample thresholds comprehensively across this range, ensuring that the representation characteristics associated with each threshold are fully captured and exploited for FER. Specifically, we partition the embedding layer of the deep metric learning network into multiple slices, with each slice corresponding to a sampled threshold. We then train these embedding slices end-to-end, applying to each slice a triplet loss with its associated threshold, which yields a distinct set of expression features for each embedding slice. Moreover, we identify that the conventional triplet loss may fail to converge when the widely used Batch Hard strategy is employed for mining informative triplets, and we introduce a novel loss, termed dual triplet loss, to address this issue. Extensive evaluations demonstrate the superior performance of the proposed approach on both posed and spontaneous facial expression datasets.
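To make the multi-threshold idea concrete, the following is a minimal illustrative sketch (not the authors' released code) of an embedding layer split into per-threshold slices, each trained with a standard triplet loss at its own margin. The backbone dimension, slice dimension, and the sampled margins are placeholder assumptions; triplet mining (e.g., Batch Hard) and the paper's dual triplet loss are omitted.

```python
# Sketch only: shared backbone features -> embedding split into slices,
# one triplet loss (with its own margin/threshold) per slice.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiThresholdEmbedding(nn.Module):
    def __init__(self, backbone_dim=512, slice_dim=64, margins=(0.1, 0.2, 0.3, 0.4)):
        super().__init__()
        self.margins = margins
        self.slice_dim = slice_dim
        # One embedding slice per sampled threshold.
        self.head = nn.Linear(backbone_dim, slice_dim * len(margins))

    def forward(self, backbone_features):
        emb = self.head(backbone_features)
        # Segment the embedding layer into per-threshold slices.
        return torch.split(emb, self.slice_dim, dim=1)

def multi_threshold_triplet_loss(slices, margins, anchor_idx, pos_idx, neg_idx):
    """Sum of standard triplet losses, one per slice, each with its own margin."""
    total = 0.0
    for emb, margin in zip(slices, margins):
        emb = F.normalize(emb, dim=1)
        a, p, n = emb[anchor_idx], emb[pos_idx], emb[neg_idx]
        d_ap = (a - p).pow(2).sum(dim=1)  # anchor-positive distance
        d_an = (a - n).pow(2).sum(dim=1)  # anchor-negative distance
        total = total + F.relu(d_ap - d_an + margin).mean()
    return total
```

In this sketch all slices share the backbone and are optimized jointly end-to-end, so each slice specializes to the inter-class separation induced by its margin; how the paper combines or selects slice features for final recognition is not shown here.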
