In-air signatures are a promising application that has been investigated extensively in recent decades; an in-air signature is captured with a portable device such as a smartwatch. During the signing process, individuals wear a smartwatch on the wrist and sign their names in the air. The dataset used in this study collected in-air signatures from 22 participants, resulting in a total of 440 smartwatch in-air signature signals. The dynamic time warping (DTW) algorithm was applied to verify the usability of the dataset. This paper analyzes and compares the performance of multiple convolutional neural networks (CNNs) and a transformer on this medium-sized smartwatch in-air signature dataset. For the four CNN models, the in-air signature data were first transformed into visible three-dimensional static signatures. For the transformer, the nine-dimensional in-air signature signals were concatenated and downsampled to the desired length and then fed into the transformer for time-sequence multi-class classification. The performance of each model on the smartwatch in-air signature dataset was thoroughly tested with 10 optimizers and different learning rates. The best test score in our experiments was 99.8514%, achieved with ResNet using the Adagrad optimizer at a learning rate of 1 × 10⁻⁴.
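As a rough illustration of the transformer preprocessing step described above, the sketch below shows one way the nine-channel signature signals could be resampled to a fixed length before being fed to a sequence model; the target length, channel interpretation, and use of linear interpolation are assumptions for illustration, not details taken from the paper.

```python
import numpy as np


def preprocess_for_transformer(signal: np.ndarray, target_len: int = 256) -> np.ndarray:
    """Downsample a nine-channel in-air signature to a fixed length.

    signal: array of shape (T, 9) holding the raw smartwatch channels
            (e.g. accelerometer, gyroscope, and orientation axes -- assumed).
    target_len: desired sequence length after resampling (assumed value).
    Returns an array of shape (target_len, 9) suitable as transformer input.
    """
    T, channels = signal.shape
    old_t = np.linspace(0.0, 1.0, T)
    new_t = np.linspace(0.0, 1.0, target_len)
    # Linearly interpolate each channel onto the shorter time grid.
    resampled = np.stack(
        [np.interp(new_t, old_t, signal[:, c]) for c in range(channels)],
        axis=1,
    )
    return resampled


if __name__ == "__main__":
    # A synthetic 500-sample, nine-channel signature for demonstration only.
    fake_signature = np.random.randn(500, 9)
    x = preprocess_for_transformer(fake_signature)
    print(x.shape)  # (256, 9)
```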