Abstract

This research enhances real-time object detection on mobile devices by introducing a multi-object detection system built on a quantized YOLOv7 model. Focusing on the challenges of food item detection, particularly in diverse and cluttered scenes, our study uses a dataset covering five food classes. By investigating the influence of data quantity on the detection model, we demonstrate that larger datasets improve performance for both YOLOv5 and YOLOv7. In addition, our comparison shows that YOLOv7 achieves better precision, recall, and F1-score than YOLOv5. The key methodological contribution is the successful quantization of the YOLOv7 model, which reduces the model size from 28.6 KB to 14.3 KB and enables seamless mobile application development. The resulting mobile application delivers a real-time interface response time of 235 ms, with precision, recall, and F1-score values of 0.923, 0.9, and 0.911, respectively. Beyond the practical implications for informed dietary choices and improved health outcomes, our study advances object detection techniques, offering insights applicable across domains and underscoring the potential impact of our approach on both theory and practice.
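The abstract does not specify the quantization scheme, but the reported halving of model size (28.6 KB to 14.3 KB) is consistent with float32-to-float16 post-training quantization. The sketch below illustrates that idea on a synthetic weight tensor (not the actual YOLOv7 weights): storage drops by exactly 2x while the introduced rounding error stays small.

```python
import numpy as np

# Synthetic stand-in for a layer's float32 weights; the real YOLOv7
# weights are not part of this abstract.
rng = np.random.default_rng(0)
weights_fp32 = rng.normal(0.0, 0.1, size=(1024, 256)).astype(np.float32)

# Float16 post-training quantization: each weight is stored in 2 bytes
# instead of 4, matching the ~2x size reduction reported above.
weights_fp16 = weights_fp32.astype(np.float16)

size_ratio = weights_fp32.nbytes / weights_fp16.nbytes
max_error = np.abs(weights_fp32 - weights_fp16.astype(np.float32)).max()

print(f"size ratio: {size_ratio:.1f}x, max abs rounding error: {max_error:.2e}")
```

In practice such a conversion is usually done through a deployment toolkit (e.g. a mobile inference runtime) rather than on raw arrays, but the storage arithmetic is the same.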
