Abstract
The idea is to identify the character a user is writing on a piece of paper by observing the acceleration of the pen during writing and passing this acceleration data to an Artificial Intelligence (AI) algorithm. Using an off-the-shelf accelerometer paired with a microcontroller over the Inter-Integrated Circuit (I2C) protocol, we obtain sequential acceleration values in three directions. Upon inspecting this data, we noticed recurring patterns whenever a specific character was written and decided to use Long Short-Term Memory (LSTM) cells to recognize them. After collecting data for training the neural network and performing some preprocessing, we used basic LSTM cells in our models to recognize the patterns in these sequences of values. LSTM cells were preferred over regular Recurrent Neural Network (RNN) cells due to their ability to remember longer sequences. Multiple LSTM models with varying numbers of layers, layer sizes, activation functions, and dropout values were trained and evaluated, and we achieved a test accuracy of 47% on a fairly small dataset, far exceeding the 10% benchmark of simple guesswork. With further optimization of the network's hyperparameters and training on a larger dataset, we believe better performance can be achieved. For the purpose of this paper, we have used the numerical digits (0–9) as the characters to be classified.
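A minimal sketch of the kind of LSTM classifier described above, written in Keras, is given below. The sequence length, layer sizes, dropout rate, and optimizer are illustrative assumptions only, not the authors' exact architecture or hyperparameters; the placeholder random data stands in for the recorded 3-axis acceleration sequences.

```python
# Sketch only: an LSTM classifier over 3-axis acceleration sequences,
# assuming sequences are padded/truncated to a fixed length of 100 samples
# and labels are the digits 0-9. All hyperparameters are illustrative.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN = 100      # assumed number of (x, y, z) samples per character
NUM_AXES = 3       # acceleration in three directions
NUM_CLASSES = 10   # digits 0-9

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, NUM_AXES)),
    layers.LSTM(64, return_sequences=True),   # first LSTM layer
    layers.Dropout(0.3),                      # dropout for regularization
    layers.LSTM(32),                          # second LSTM layer
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Placeholder random data standing in for preprocessed pen-acceleration data.
x_train = np.random.randn(256, SEQ_LEN, NUM_AXES).astype("float32")
y_train = np.random.randint(0, NUM_CLASSES, size=256)
model.fit(x_train, y_train, epochs=5, batch_size=32)
```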