Abstract

Photoplethysmography (PPG) is one of the most widely used physiological signals on wearable devices; owing to its portability and accessibility, it is an ideal carrier for biometric recognition aimed at protecting sensitive information. However, existing state-of-the-art methods are difficult to deploy in practice because wearable devices are power-constrained and computationally limited. 1D convolutional neural networks (1D-CNNs) have succeeded in numerous applications on sequential signals, yet they fall short in modeling long-range dependencies (LRD), which are essential for high-security PPG-based biometric recognition. In view of these limitations, this paper presents a comparative study of scalable end-to-end 1D-CNNs that capture LRD and parameterize authorized templates by enlarging receptive fields through stacked convolutions, non-local blocks, and attention mechanisms. Compared to a robust baseline model, seven scalable models change recognition accuracy by −0.2% to 9.9% across three datasets, and the experimental cases demonstrate clear-cut improvements. The scalable models achieve state-of-the-art performance, with over 97% accuracy on VitalDB and best accuracies of 99.5% and 99.3% on the BIDMC and PRRB datasets, respectively. We also discuss the effect of capturing LRD on the generated templates through visualizations using the Gramian Angular Summation Field and Class Activation Map. This study shows that scalable 1D-CNNs offer a high-performance, computationally feasible approach to PPG-based biometric recognition.
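
To make the architectural idea concrete, the sketch below illustrates one way a 1D-CNN can enlarge its receptive field with stacked convolutions plus a non-local (self-attention) block over the temporal axis, producing identity logits from a raw PPG segment. This is a minimal, hypothetical example written in PyTorch; the class names, layer sizes, and input length are illustrative assumptions, not the architecture evaluated in the paper.

```python
# Illustrative sketch (not the paper's implementation): a 1D-CNN whose
# receptive field is enlarged by stacked strided convolutions and a
# non-local block, mapping a raw PPG segment to identity logits.
import torch
import torch.nn as nn


class NonLocal1d(nn.Module):
    """Non-local (self-attention) block over the temporal axis of a (B, C, T) feature map."""

    def __init__(self, channels: int):
        super().__init__()
        inner = channels // 2
        self.theta = nn.Conv1d(channels, inner, kernel_size=1)
        self.phi = nn.Conv1d(channels, inner, kernel_size=1)
        self.g = nn.Conv1d(channels, inner, kernel_size=1)
        self.out = nn.Conv1d(inner, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q = self.theta(x).transpose(1, 2)                         # (B, T, C/2)
        k = self.phi(x)                                           # (B, C/2, T)
        v = self.g(x).transpose(1, 2)                             # (B, T, C/2)
        attn = torch.softmax(q @ k / q.shape[-1] ** 0.5, dim=-1)  # (B, T, T) pairwise weights
        y = (attn @ v).transpose(1, 2)                            # (B, C/2, T)
        return x + self.out(y)                                    # residual connection


class ScalablePPGNet(nn.Module):
    """Toy end-to-end model: raw PPG segment -> identity logits."""

    def __init__(self, num_subjects: int, channels: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=7, stride=2, padding=3),
            nn.BatchNorm1d(channels), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=5, stride=2, padding=2),
            nn.BatchNorm1d(channels), nn.ReLU(),
            NonLocal1d(channels),  # lets every time step attend to all others (LRD)
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(channels, num_subjects),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))


if __name__ == "__main__":
    model = ScalablePPGNet(num_subjects=20)
    segment = torch.randn(8, 1, 500)   # e.g. 5 s PPG segments sampled at 100 Hz (assumed)
    print(model(segment).shape)        # torch.Size([8, 20])
```

The key design point the sketch mirrors is that local convolutions alone grow the receptive field slowly, whereas a single non-local or attention layer connects all time steps directly, which is how the paper's scalable variants target long-range dependencies.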
