Abstract

An algorithm is presented for mobile vision-based localization of skewed nutrition labels (NLs) on grocery packages that maximizes specificity, i.e., the percentage of true negative matches out of all possible negative matches. The algorithm works on frames captured from the smartphone camera's video stream and localizes NLs skewed up to 35-40° in either direction from the vertical axis of the captured frame. It uses three image processing methods: edge detection, line detection, and corner detection. The algorithm targets medium- to high-end mobile devices with single- or quad-core ARM systems. Since cameras on these devices capture several frames per second, the algorithm is designed to minimize false positives rather than to maximize true positives: an NL missed in one frame is likely to be captured in a subsequent frame, so at such capture rates it is far more important for overall performance to minimize the processing time per frame. The algorithm is implemented on the Google Nexus 7 running Android 4.3. Evaluation was done on 378 frames, of which 266 contained NLs and 112 did not. The algorithm's performance, current limitations, and possible improvements are analyzed and discussed.
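For illustration, the sketch below shows a minimal three-stage edge/line/corner pipeline of the kind the abstract describes, written in Python with OpenCV. The function name, the Canny and Hough parameters, the ±40° skew test, and the final evidence thresholds are assumptions made for this sketch; they are not the paper's actual parameters or implementation, which ran on Android.

```python
import cv2
import numpy as np

def localize_nl_candidate(frame_bgr, max_skew_deg=40):
    """Illustrative three-stage pipeline (edge -> line -> corner detection).

    Returns a candidate bounding box (x_min, y_min, x_max, y_max), or None
    when the frame is rejected as containing no nutrition label.
    All thresholds are placeholder values, not the paper's.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)

    # Stage 1: edge detection (Canny thresholds are illustrative).
    edges = cv2.Canny(gray, 50, 150)

    # Stage 2: probabilistic Hough line detection; keep lines whose angle from
    # the frame's vertical axis lies within the allowed skew range.
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=60, maxLineGap=5)
    near_vertical = []
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            angle = abs(np.degrees(np.arctan2(x2 - x1, y2 - y1)))  # 0 deg == vertical
            angle = min(angle, 180.0 - angle)                      # ignore line direction
            if angle <= max_skew_deg:
                near_vertical.append((x1, y1, x2, y2))

    # Stage 3: corner detection, used as additional evidence of the dense
    # text/table structure characteristic of a nutrition label.
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=10)

    # Conservative decision rule biased toward rejection (high specificity):
    # without enough line and corner evidence, the frame is discarded quickly.
    if len(near_vertical) < 4 or corners is None or len(corners) < 20:
        return None

    xs = [p for x1, _, x2, _ in near_vertical for p in (x1, x2)]
    ys = [p for _, y1, _, y2 in near_vertical for p in (y1, y2)]
    return (int(min(xs)), int(min(ys)), int(max(xs)), int(max(ys)))
```

Biasing the decision rule toward rejection mirrors the specificity goal stated in the abstract: a frame without sufficient evidence is dropped immediately, which keeps per-frame processing time low.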
