Abstract

This research aims to perform identification and recognition tasks on the OpenMV Camera H7. All tests were carried out using deep learning and covered several functions: face recognition, facial expression recognition, detection and counting of objects, and object depth estimation. Facial expression recognition used a Convolutional Neural Network to recognize five facial expressions: angry, happy, neutral, sad, and surprised, trained on a primary dataset captured with a 48 MP camera. Several scenarios were prepared to account for environmental variability in the implementation, covering indoor and outdoor environments with different lighting conditions and distances. Most of the pre-trained models used for identification and recognition were based on MobileNetV2, since this model has a low computational cost and suits the camera's limited hardware specifications. The object detection and counting module compared two methods: the conventional Haar Cascade and a deep learning MobileNetV2 model. Training and validation are not recommended on OpenMV devices and should instead be carried out on high-specification computers. The models were trained and validated on selected primary and secondary data totaling 1,500 images, with a computation time of around 5 minutes for ten epochs. On average, recognition on the OpenMV device takes around 0.3 to 2 seconds per frame. Recognition accuracy varies with the pre-trained model and the dataset used, but overall it is very high, exceeding 96.6%.
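As a rough illustration of how such a pipeline typically looks on the device, the sketch below combines the two approaches compared in the abstract: Haar Cascade face detection followed by classification with a quantized MobileNetV2 TensorFlow Lite model, using OpenMV's MicroPython API. The model file name, label order, thresholds, and frame size are assumptions for illustration only, not the authors' exact configuration; newer OpenMV firmware exposes an ml module in place of tf.

```python
# Hypothetical OpenMV H7 sketch: Haar Cascade face detection followed by
# facial-expression classification with a quantized MobileNetV2 .tflite model.
# File name, label order, and thresholds are illustrative assumptions.
import sensor, image, time, tf

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)   # Haar cascades operate on grayscale frames
sensor.set_framesize(sensor.QVGA)        # 320x240 keeps per-frame latency low on the H7
sensor.skip_frames(time=2000)            # let auto-exposure settle

# Built-in frontal-face Haar cascade shipped with the OpenMV firmware.
face_cascade = image.HaarCascade("frontalface", stages=25)

# Assumed label order for the five expressions reported in the paper.
LABELS = ["angry", "happy", "neutral", "sad", "surprised"]

# Assumed model file on the SD card; load it into the frame buffer once.
net = tf.load("expression_mnv2.tflite", load_to_fb=True)

clock = time.clock()
while True:
    clock.tick()
    img = sensor.snapshot()

    # Stage 1: conventional Haar Cascade face detection.
    for r in img.find_features(face_cascade, threshold=0.75, scale_factor=1.25):
        img.draw_rectangle(r)

        # Stage 2: classify the detected face region with the MobileNetV2 model.
        face = img.copy(roi=r)
        for obj in net.classify(face):
            scores = obj.output()
            best = scores.index(max(scores))
            print("%s: %.2f" % (LABELS[best], scores[best]))

    # Per-frame timing, comparable to the 0.3-2 s per frame reported above.
    print("%.2f ms/frame" % (1000.0 / clock.fps()))
```

The two stages make the comparison in the abstract concrete: the Haar Cascade handles cheap detection on-device, while the MobileNetV2 classifier (trained off-device on a high-specification computer and exported to TensorFlow Lite) handles recognition.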
