People with visual impairments often struggle to verify the authenticity of paper money, a crucial skill for avoiding fraud. Traditional aids, such as blind codes printed on banknotes, are limited, calling for a more advanced and efficient solution. Previous currency-detection methods based on Convolutional Neural Networks (CNNs), including the VGG-19 architecture, have faced challenges, particularly long training times. We therefore propose applying transfer learning and modifying the top layers of the VGG-19 model, its fully connected layers, within a mobile application with audio feedback built in Android Studio. The modification replaces the three fully connected layers with flatten and dense layers. We also performed hyperparameter tuning, adjusting the batch size and the number of epochs. The dataset comprised Indonesian Rupiah banknotes from the 2022 emission year, specifically the Rp 50,000 and Rp 100,000 denominations. The best transfer-learning VGG-19 model, trained with a batch size of 32 for 50 epochs, achieved an accuracy of 88%. Response-speed testing with performance profiling in Android Studio showed an overall average response time of 458 ms, so the mobile app can be categorized as having a fast response time. The main advantage of using transfer learning with the VGG-19 model is that it significantly reduces training time while still achieving high accuracy, distinguishing this work from previous studies that trained from scratch, which is more time-consuming and resource-intensive.
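The transfer-learning setup described above can be sketched as follows: a VGG-19 base with frozen pretrained convolutional layers, whose original three fully connected layers are replaced by a flatten layer and new dense layers, with the reported hyperparameters (batch size 32, 50 epochs) supplied at training time. This is a minimal Keras sketch; the dense-layer width, optimizer, and input size are assumptions, not details taken from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG19

# Pretrained VGG-19 base without its original fully connected top layers.
base = VGG19(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze convolutional layers for transfer learning

# Replace the three fully connected layers with flatten and dense layers.
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),   # assumed width, not from the paper
    layers.Dense(2, activation="softmax"),  # two classes: Rp 50,000 / Rp 100,000
])

model.compile(optimizer="adam",             # assumed optimizer
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Hyperparameters reported as best in the abstract would be passed here:
# model.fit(train_ds, validation_data=val_ds, batch_size=32, epochs=50)
```

Freezing the base means only the new top layers are trained, which is what keeps training time far below that of training the full network from scratch.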