Abstract
Facial expression analysis on 3D data has the potential to avoid many of the difficulties inherent to 2D data, such as lighting variation and non-frontal pose. In particular, analysis of 3D point cloud data (as opposed to depth maps) offers the potential for higher-resolution, pose-invariant features. Because neural networks and deep learning have proven to be powerful tools for a wide variety of tasks in recent years, one would naturally wish to apply deep learning to expression analysis of 3D point data. However, the overwhelming majority of deep learning methods target 2D image data, and only a few works process 3D point data directly in a neural network for any purpose; those that do report improvements over other data representations. Therefore, in this work, we experiment with recent successful architectures and propose a new architecture, Local Continuous PointNet (LCPN), for unordered 3D point cloud analysis to detect Action Units (AUs) in the BP4D-Spontaneous database. We also perform cross-database experiments on subjects from the BP4D+ database. To the best of the authors' knowledge, this is the first work that directly processes unordered 3D point clouds in a neural network for facial expression analysis.