Abstract
Large datasets are important for building powerful pipelines that generalize well to new images. Image classification is the most basic problem in Computer Vision, yet labeling images can be a tedious job when many samples are required, and convolutional neural networks (CNNs) are known to be data-hungry. How can we build models without much data? Sign Language Recognition (SLR) is one such case: vision-based SLR systems require labeled images, but the available Indonesian Sign Language dataset contains relatively few samples per class. This research aims to classify sign language images using Computer Vision for a Sign Language Recognition system. We used a small Indonesian Sign Language dataset with 26 alphabet classes, A-Z, and 12 images per class. The methodology in this research is few-shot learning. Based on our experiments, the best accuracy for few-shot learning was obtained by the Mnasnet1_0 convolutional network model with Matching Networks (85.75%), with a loss of about 0.43, and the experiments indicate that accuracy increases as the number of shots increases. The Matching Networks framework is unsuitable for the Inception V3 model because the kernel size cannot be greater than the actual input size. Based on this research, we can choose the best algorithm for the Indonesian Sign Language application we will develop further.
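The sketch below is not the authors' code; it is a minimal illustration, under assumed shapes and shot counts, of how a Matching Networks-style few-shot episode can use torchvision's mnasnet1_0 as a feature extractor, which is the pairing the abstract reports as best. The episode sizes, placeholder tensors, and cosine-similarity attention are illustrative assumptions.

```python
# Minimal sketch of a Matching Networks-style few-shot episode with a MnasNet1_0
# backbone (assumes torch and torchvision are installed; shapes are illustrative).
import torch
import torch.nn.functional as F
from torchvision import models

# Use mnasnet1_0 as a feature extractor by replacing its classifier head.
backbone = models.mnasnet1_0(weights=None)
backbone.classifier = torch.nn.Identity()
backbone.eval()

n_way, k_shot, n_query = 5, 1, 3                       # e.g. a 5-way 1-shot episode
support = torch.randn(n_way * k_shot, 3, 224, 224)     # support images (placeholders)
query = torch.randn(n_way * n_query, 3, 224, 224)      # query images (placeholders)
support_labels = torch.arange(n_way).repeat_interleave(k_shot)

with torch.no_grad():
    s_emb = F.normalize(backbone(support), dim=1)      # embed and L2-normalize support set
    q_emb = F.normalize(backbone(query), dim=1)        # embed and L2-normalize queries

# Matching Networks classify a query by attention over the support set;
# here the attention is a softmax over cosine similarities, summed per class.
similarity = q_emb @ s_emb.t()                         # (queries, support examples)
attention = similarity.softmax(dim=1)
class_scores = torch.zeros(q_emb.size(0), n_way)
class_scores.index_add_(1, support_labels, attention)  # accumulate attention per class
predictions = class_scores.argmax(dim=1)               # one predicted class per query
```

Increasing k_shot in this setup adds more labeled examples per class to the support set, which is consistent with the reported trend that accuracy improves with the number of shots.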