In deep learning (DL), an adversary creates Deepfakes by manipulating facial features to impersonate a target. Deepfakes pose a security threat to individual privacy and are a primary concern for society. They can be detected by exploiting the texture and physiological properties of the face, such as eye and lip movements; however, such methods are ineffective against Deepfakes created with recent generative adversarial networks (GANs). Alternatively, remote photoplethysmography (rPPG) information can be used for Deepfake detection, because GANs neglect human physiological information during Deepfake generation. Such detection becomes inaccurate, though, when the rPPG signals are corrupted by noise induced by facial deformation and illumination variations. Furthermore, existing Deepfake detection methods are usually built on sequential models, which fail to process long sequences of temporal information. These issues are mitigated by our proposed method AVENUE, that is, \(A\) no\(V\)el d\(E\)epfake detectio\(N\) method based on temporal convol\(U\)tion n\(E\)twork & rPPG information. To mitigate the noise in the rPPG signals, the proposed method detects and employs the relatively stable clips of the input video, i.e., the clips least affected by facial deformation. We also use a modified temporal convolution network (TCN), rather than a sequential architecture, to model long sequences of Deepfake information. Experiments on publicly available Deepfake video datasets demonstrate that our proposed method outperforms existing rPPG-based Deepfake detection methods.
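To make the stable-clip idea concrete, below is a minimal sketch of one plausible selection criterion: rank fixed-length windows by aggregate facial-landmark motion and keep the least deformed ones. The landmark-variance criterion, the helper name \texttt{select\_stable\_clips}, and the window/clip parameters are illustrative assumptions, not the paper's exact procedure.

\begin{verbatim}
import numpy as np

def select_stable_clips(landmarks, clip_len=64, num_clips=3):
    """Pick the video clips least affected by facial deformation.

    landmarks: (T, K, 2) array of K facial landmark positions per
    frame (assumed to come from an off-the-shelf landmark detector).
    Returns start indices of the `num_clips` most stable clips.
    """
    # Per-frame deformation: mean landmark displacement between frames.
    motion = np.linalg.norm(np.diff(landmarks, axis=0), axis=-1).mean(axis=-1)

    # Deformation score of every length-`clip_len` window (sliding sum).
    cumsum = np.concatenate([[0.0], np.cumsum(motion)])
    scores = cumsum[clip_len:] - cumsum[:-clip_len]

    # Greedily take the lowest-scoring, non-overlapping windows.
    starts = []
    for s in np.argsort(scores):
        if all(abs(int(s) - t) >= clip_len for t in starts):
            starts.append(int(s))
        if len(starts) == num_clips:
            break
    return starts
\end{verbatim}

The rPPG signals extracted from these low-motion windows should carry less deformation-induced noise than signals taken from arbitrary segments of the video.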
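The paper's modification to the TCN is not detailed here; as a reference point, the following is a minimal sketch of a standard dilated causal TCN residual block in PyTorch, assuming the per-clip rPPG signals are fed as 1-D channel sequences. The layer widths, kernel size, and two-class head are illustrative assumptions.

\begin{verbatim}
import torch
import torch.nn as nn

class TemporalBlock(nn.Module):
    """One dilated causal-convolution residual block of a TCN.

    Dilation doubles per block, so the receptive field grows
    exponentially with depth -- this is what lets a TCN model long
    temporal sequences that sequential (RNN-style) models handle poorly.
    """
    def __init__(self, in_ch, out_ch, kernel_size=3, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation  # left-pad => causal
        self.conv1 = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)
        self.conv2 = nn.Conv1d(out_ch, out_ch, kernel_size, dilation=dilation)
        self.relu = nn.ReLU()
        self.down = nn.Conv1d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()

    def forward(self, x):  # x: (batch, channels, time)
        y = self.relu(self.conv1(nn.functional.pad(x, (self.pad, 0))))
        y = self.relu(self.conv2(nn.functional.pad(y, (self.pad, 0))))
        return self.relu(y + self.down(x))  # residual connection

# Illustrative classifier: stacked blocks, then a binary real/fake head.
tcn = nn.Sequential(
    TemporalBlock(1, 32, dilation=1),
    TemporalBlock(32, 32, dilation=2),
    TemporalBlock(32, 32, dilation=4),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, 2),
)
logits = tcn(torch.randn(8, 1, 256))  # 8 rPPG clips of 256 samples -> (8, 2)
\end{verbatim}

With three blocks of kernel size 3 and dilations 1, 2, 4, the receptive field already spans several dozen samples, and it keeps doubling with each added block at constant per-layer cost.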