Abstract

This paper describes an approach to active facial liveness detection based on facial features and movements. The project aims to provide an improved method for real-time liveness detection on an application programming interface (API) server. The system is built in Python with the computer vision libraries OpenCV, dlib, and MediaPipe and the deep learning library TensorFlow. Active liveness detection is organized into five modules, each tied to a facial part or movement: head shaking, nodding, eye blinking, smiling, and mouth movement. The modules operate through face landmarking with dlib and MediaPipe and through facial feature detection with TensorFlow convolutional neural networks (CNNs) trained for two tasks: smile detection and eye-blink detection. Face landmarking proves accurate with both the pre-trained MediaPipe model and the pre-trained dlib 68-point landmark model, and both trained CNNs exceed 90% precision, recall, and F1-score for smile and eye-blink detection according to the Scikit-Learn classification report. In addition, a prototype API is implemented with the Python RESTful framework FastAPI to test the detection functionality from a prototype Android application. The prototype results are excellent: the application reliably sends requests to and retrieves results from the API server. The success of real-time detection on an API server opens a research path toward easy deployment of liveness detection on low-specification client devices.
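To illustrate the landmarking step described above, the following is a minimal sketch of 68-point face landmarking with dlib; the model file path and the helper function are illustrative assumptions, not details taken from the paper.

```python
import cv2
import dlib

# Pre-trained dlib components: a HOG-based face detector and the 68-point
# landmark predictor. The model file is assumed to be downloaded separately
# (dlib's standard shape_predictor_68_face_landmarks.dat).
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def landmarks_68(frame_bgr):
    """Return a list of 68 (x, y) landmark tuples for each detected face."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    results = []
    for face in detector(gray):
        shape = predictor(gray, face)
        results.append([(shape.part(i).x, shape.part(i).y) for i in range(68)])
    return results
```

Movement modules such as head-shake or nod detection can then be built on top of these coordinates, for example by tracking the horizontal or vertical displacement of the nose-tip landmark across frames.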
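For the CNN-based modules, the sketch below shows the kind of small binary classifier the abstract describes, e.g. smile vs. no-smile on cropped face patches; the input shape, layer sizes, and optimizer are assumptions, not the paper's reported architecture.

```python
import tensorflow as tf

def build_classifier(input_shape=(64, 64, 1)):
    """A small binary CNN, illustrative of a smile or eye-blink detector."""
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # positive / negative class
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

The precision, recall, and F1 figures cited above come from Scikit-Learn's `classification_report`, which can be applied to such a model's thresholded predictions against the held-out labels.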
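Finally, a minimal sketch of a FastAPI endpoint of the kind the prototype exposes to the Android client is shown below; the route name, request format, and `is_blinking` helper are hypothetical stand-ins for the paper's actual detection modules.

```python
import cv2
import numpy as np
from fastapi import FastAPI, File, UploadFile

app = FastAPI()

def is_blinking(frame) -> bool:
    """Hypothetical placeholder: a real server would invoke the trained
    CNN or landmark-based blink check here."""
    return False

@app.post("/liveness/blink")  # route name is an illustrative example
async def detect_blink(image: UploadFile = File(...)):
    # Decode the frame uploaded by the client application.
    data = np.frombuffer(await image.read(), dtype=np.uint8)
    frame = cv2.imdecode(data, cv2.IMREAD_COLOR)
    return {"live": bool(is_blinking(frame))}
```

Serving the detection models behind an endpoint like this is what allows low-specification client devices to offload all inference to the server, as the abstract's closing claim suggests.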
