With the development of Metaverse technology, avatars in the Metaverse face serious security and privacy concerns. Analyzing facial features to distinguish genuine from manipulated facial videos is of significant research importance for ensuring the authenticity of characters in the virtual world, for mitigating discrimination, and for preventing malicious use of facial data. To address this issue, the Facial Feature Points and Ch-Transformer (FFP-ChT) deepfake video detection model is designed around two cues: real and fake videos differ in the distribution of facial feature points, and in the displacement distances of those points between frames. An input face video is first processed by the BlazeFace model, and the face detection results are fed into the FaceMesh model to extract 468 facial feature points. The Lucas-Kanade (LK) optical flow method then tracks these points, a face calibration algorithm is introduced to re-calibrate them, and the jitter displacement is calculated by tracking the facial feature points between frames. Finally, a Class-head (Ch) is designed in the transformer, and the facial feature points and their inter-frame displacements are jointly classified by the Ch-Transformer model. In this way, the designed Ch-Transformer classifier is able to accurately and effectively identify deepfake videos. Experiments on open datasets clearly demonstrate the effectiveness and generalization capability of our approach.
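The jitter-displacement cue described above can be sketched as follows: given the landmark coordinates of corresponding facial feature points in two consecutive frames (e.g., the 468 FaceMesh points, already extracted and calibrated), the per-point displacement is the Euclidean distance between matched points. This is a minimal illustrative sketch, not the paper's implementation; the function name and the toy coordinates are assumptions.

```python
import math

def jitter_displacements(prev_pts, curr_pts):
    """Per-landmark Euclidean displacement between consecutive frames.

    prev_pts, curr_pts: sequences of (x, y) landmark coordinates for
    corresponding facial feature points (468 per frame for FaceMesh).
    Returns one displacement distance per landmark.
    """
    return [math.dist(p, q) for p, q in zip(prev_pts, curr_pts)]

# Illustrative example with three landmarks (hypothetical coordinates).
prev = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]
curr = [(0.0, 1.0), (1.0, 1.0), (2.0, 2.0)]
print(jitter_displacements(prev, curr))  # → [1.0, 0.0, 2.0]
```

In the full pipeline, such displacement vectors would be computed for every frame pair and fed, together with the feature-point coordinates themselves, to the downstream classifier.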