Abstract
Data biases such as class imbalance and label noise are pervasive in real-world large-scale datasets and pose serious challenges to deep learning methods. Previous works have adopted loss re-weighting, sample re-weighting, or data-dependent regularization to mitigate the influence of these training biases. However, when class imbalance and label noise coexist in the training set, these methods usually focus on the class imbalance problem and may overfit the noisy labels, leading to severe performance degradation. In this paper, we propose a gradient-aware learning method that handles the combination of the two biases. During training, we regularly update only a small set of crucial parameters and rectify the update direction of the remaining redundant parameters. This update rule is applied to both the encoder and the classifier of the deep network to implicitly decouple label noise from class imbalance. Experimental results verify the effectiveness of the proposed method on both synthetic and real-world data biases.
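The update rule described above can be illustrated with a minimal NumPy sketch. The abstract does not specify how crucial parameters are selected or how redundant updates are rectified, so this sketch makes two hypothetical choices: gradient magnitude picks the crucial subset, and the redundant updates are rectified by simple damping. The function name `gradient_aware_update` and all parameter names are illustrative, not from the paper.

```python
import numpy as np

def gradient_aware_update(params, grads, lr=0.1, crucial_frac=0.5, damp=0.01):
    """Hypothetical sketch of a gradient-aware update rule.

    Parameters with the largest gradient magnitudes are treated as
    'crucial' and receive the full gradient step; the remaining
    'redundant' parameters have their update rectified, here by
    damping (the paper's exact rectification rule is not given in
    the abstract).
    """
    flat_g = np.abs(grads).ravel()
    k = max(1, int(crucial_frac * flat_g.size))
    # magnitude threshold separating crucial from redundant parameters
    thresh = np.partition(flat_g, -k)[-k]
    crucial = np.abs(grads) >= thresh
    # full step for crucial parameters, damped step for redundant ones
    step = np.where(crucial, grads, damp * grads)
    return params - lr * step
```

In a full training loop this selection would be recomputed regularly (the abstract says "update only a part of crucial parameters regularly") and applied to both encoder and classifier parameters separately.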