Abstract

Using a loss function with appropriate properties to supervise the training of a neural network can substantially improve its accuracy and convergence speed. Most loss functions applied to landmark detection, including Smooth ℓ1 loss, Wing loss, and Adaptive Wing loss, use a piece-wise formulation to amplify the loss value of small errors. However, these functions suffer from several issues, such as non-differentiability, sensitivity to outliers, and the additional computation required to guarantee continuity. In this paper, we propose a unified loss function called Adaptive Robust loss (ARobust loss) that is tailored to the two landmark detection paradigms, i.e., coordinate regression and heatmap regression. In addition, we analytically demonstrate the robustness, smoothness, and continuity of the proposed loss function under both paradigms. Finally, we experimentally validate the effectiveness and accuracy of the Adaptive Robust loss on several landmark detection datasets. The best NME (%) results with HRNet-W18 reach 4.52 on WFLW-Test and 3.34 on 300W-Full. On the standard tasks of face alignment and human pose estimation, our approach performs on par with state-of-the-art losses, and its advantage is more pronounced in challenging scenarios.
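To illustrate the piece-wise construction the abstract refers to, below is a minimal NumPy sketch of the standard Smooth ℓ1 and Wing loss formulas (not the proposed ARobust loss, whose definition is not given in this abstract). The parameter names `beta`, `w`, and `eps` follow the conventional formulations; note the constant `C` in Wing loss, computed solely to keep the two pieces continuous at the switching point, which is the kind of extra bookkeeping the abstract criticizes.

```python
import numpy as np

def smooth_l1(x, beta=1.0):
    """Standard Smooth L1 (Huber-style) loss: quadratic near zero, linear elsewhere."""
    ax = np.abs(x)
    return np.where(ax < beta, 0.5 * ax ** 2 / beta, ax - 0.5 * beta)

def wing_loss(x, w=10.0, eps=2.0):
    """Standard Wing loss: logarithmic near zero to amplify small errors.

    C is the offset needed to make the two pieces meet continuously at |x| = w.
    """
    ax = np.abs(x)
    C = w - w * np.log(1.0 + w / eps)
    return np.where(ax < w, w * np.log(1.0 + ax / eps), ax - C)

# Example: small landmark errors are penalized more strongly by Wing loss
# than by Smooth L1, while large errors grow linearly in both.
errors = np.array([0.1, 1.0, 5.0, 20.0])
print(smooth_l1(errors))
print(wing_loss(errors))
```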
