Abstract

Computer vision research is now used in everyday applications such as art, social media filters, and face recognition. This emergence is driven by the adoption of deep learning methods in the computer vision domain. Deep learning research has improved the quality of service of many applications; systems ranging from recommendation to detection now rely on deep learning models. However, many current models require high computational power and storage space, and deploying such extensive networks on resource-limited embedded devices or smartphones is challenging. In this study, we focus on developing a model that achieves high accuracy with small computational resources using the knowledge distillation method. We evaluate our model on public and private datasets of receipt and non-receipt images gathered from Badan Pendapatan Daerah, CORD, and Kaggle. We then compare it with a regular convolutional neural network (CNN) and a pre-trained model. We discovered that the distilled model uses only 12% and 5% of the total weights of the CNN and the pre-trained model, respectively. These results suggest that knowledge distillation is a promising method that could be implemented for automatic receipt identification in the Jakarta Super App.
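As background for readers unfamiliar with the method, the knowledge distillation objective mentioned above is commonly formulated (following Hinton et al., 2015) as a weighted sum of a soft-label loss against the teacher's temperature-softened outputs and a hard-label cross-entropy loss. A minimal dependency-free sketch follows; the temperature, weighting, and example logits are illustrative values, not taken from this paper:

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: higher T produces a softer distribution.
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, true_label,
                      temperature=4.0, alpha=0.7):
    """Weighted sum of (1) KL divergence between the softened teacher and
    student distributions and (2) cross-entropy on the hard label."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    # KL(teacher || student), scaled by T^2 so its gradient magnitude
    # stays comparable as the temperature changes (Hinton et al., 2015).
    soft_loss = (temperature ** 2) * sum(
        p * math.log(p / q) for p, q in zip(p_teacher, p_student)
    )
    # Standard hard-label cross-entropy at T = 1.
    hard_probs = softmax(student_logits, 1.0)
    hard_loss = -math.log(hard_probs[true_label])
    return alpha * soft_loss + (1 - alpha) * hard_loss

# Hypothetical binary receipt / non-receipt logits for illustration.
loss = distillation_loss([2.0, -1.0], [3.0, -2.0], true_label=0)
```

Minimizing this loss trains the small student to mimic the large teacher's output distribution, which is how a distilled model can retain accuracy at a fraction of the parameter count.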
