Abstract
Deep learning has become increasingly popular and is widely applied in computer vision systems. Over the years, researchers have developed various deep learning architectures to solve different kinds of problems. However, these networks are power-hungry and require high-performance computing hardware (e.g., GPUs or TPUs) to run appropriately. Moving computation to the cloud can introduce traffic, latency, and privacy issues. Edge computing addresses these challenges by moving computation closer to the edge, where the data is generated. One major challenge is fitting the high resource demands of deep learning into less powerful edge computing devices. In this research, we present an implementation of an embedded facial recognition system, based on the FaceNet architecture, on a low-cost Raspberry Pi. This implementation required developing a C++ library that performs inference with the neural network architecture. The system achieved an accuracy of 77.38% and a precision of 81.25%. The program executes in 11 seconds and consumes 46 kB of RAM. The resulting system could be used as a stand-alone access control system. The implemented model and library are released at https://github.com/cristianMiranda-Oro/FaceNet_EmbeddedSystem
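The exact interface of the released C++ library is not described here, so the following is only a minimal sketch of the verification step a FaceNet-style system typically performs on the device: comparing the squared Euclidean distance between two face embeddings against a threshold. The embeddings would come from running the network's inference on the device (the role the released library plays); the function names and the threshold value below are illustrative assumptions, not the published API.

// Minimal sketch, not the released library's API: face verification by
// comparing the squared Euclidean distance between two FaceNet-style
// embeddings against a threshold. Names and threshold are assumptions.
#include <array>
#include <cstddef>

constexpr std::size_t kEmbeddingDim = 128;      // FaceNet produces 128-D embeddings
constexpr float kVerificationThreshold = 1.0f;  // illustrative value; tune on a validation set

using Embedding = std::array<float, kEmbeddingDim>;

// Squared L2 distance between two embeddings; smaller means more similar faces.
float squaredDistance(const Embedding& a, const Embedding& b) {
    float sum = 0.0f;
    for (std::size_t i = 0; i < kEmbeddingDim; ++i) {
        const float diff = a[i] - b[i];
        sum += diff * diff;
    }
    return sum;
}

// Decide whether two embeddings belong to the same person.
bool isSamePerson(const Embedding& probe, const Embedding& reference) {
    return squaredDistance(probe, reference) < kVerificationThreshold;
}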
Highlights
A person’s face contains physical information that can be used for security and access control applications
The solution proposed in [2] achieved a precision of 99.63% on the Labeled Faces in the Wild (LFW) [3] dataset using a deep learning system called FaceNet with almost 7.5M parameters
This architecture learns a mapping of facial images to a compact Euclidean space where the distances correspond directly to a measure of facial similarity. Another deep learning solution is [4], which attains an accuracy of 99.52% on the same LFW dataset using a VGGNet-16 neural network architecture with 138M parameters
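In the notation of the original FaceNet paper, the embedding is a function f(x) mapping a face image x into R^128, and similarity between two faces is measured by the squared Euclidean distance between their embeddings. The verification rule with a threshold \tau below is a common convention (the specific value of \tau is an assumption, chosen on a validation set, not a figure from the paper):

d(x_i, x_j) = \lVert f(x_i) - f(x_j) \rVert_2^2, \qquad \text{same identity if } d(x_i, x_j) < \tau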
Summary
A person’s face contains physical information that can be used for security and access control applications. The solution proposed in [2] achieved a precision of 99.63% on the Labeled Faces in the Wild (LFW) [3] dataset using a deep learning system called FaceNet with almost 7.5M parameters. This architecture learns a mapping of facial images to a compact Euclidean space where the distances correspond directly to a measure of facial similarity. The authors in [5] propose a new loss function called Additive Angular Margin Loss (ArcFace), which incorporates margins into a well-established loss function to maximize face class separability. They use the ResNet100 neural network with 65M parameters and obtain an accuracy of 99.83% on the LFW dataset.
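For reference, the ArcFace loss from [5] adds an additive angular margin m inside a softmax-style loss; the form below follows the original paper, where s is a feature scale factor and \theta_j is the angle between the embedding and the j-th class weight vector (stated here as background on [5], not as part of this work):

L = -\frac{1}{N} \sum_{i=1}^{N} \log \frac{e^{s\cos(\theta_{y_i} + m)}}{e^{s\cos(\theta_{y_i} + m)} + \sum_{j \ne y_i} e^{s\cos\theta_j}}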