Abstract

Distributed cameras are widely used for real-time image recognition. There are two main approaches in distributed camera systems: (1) each camera is equipped with a powerful high-end processor for local image processing, or (2) low-cost cameras with resource-constrained processors capture images and transfer them to a cloud server for classification. The first approach is costly and does not scale. The second is too slow for real-time object detection because of the transfer delay to a remote server. These problems are exacerbated in multi-view image recognition, where a central platform must collectively process multiple images of the same scene. There is therefore a need for a scalable intelligent engine that adapts to communication and processing delays, and to the energy levels of cameras (if battery operated), to accomplish real-time object recognition. This paper proposes moving from traditional cognitive services, in which models are trained in the cloud and inference is requested locally at the cameras, to a hierarchical, bandwidth-efficient machine learning structure spread across cameras, edge devices, and the cloud server for handling high-density data streams.
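
To make the adaptive, hierarchical idea above concrete, the following is a minimal Python sketch of how a camera might choose where to run inference (on-camera, at an edge device, or in the cloud) based on estimated delays and its remaining energy. The three-tier split, the thresholds, and all names (`CameraState`, `choose_tier`, `deadline_ms`) are illustrative assumptions, not the paper's actual algorithm.

```python
from dataclasses import dataclass


@dataclass
class CameraState:
    local_infer_ms: float  # estimated on-camera inference time
    edge_rtt_ms: float     # round-trip delay to the nearest edge device
    cloud_rtt_ms: float    # round-trip delay to the cloud server
    battery_pct: float     # remaining energy (0-100), if battery operated


def choose_tier(state: CameraState, deadline_ms: float = 100.0) -> str:
    """Pick the lowest tier that can still meet the real-time deadline.

    A low battery pushes work off the camera even when local inference
    would be fast enough, trading some bandwidth for energy.
    """
    if state.battery_pct > 20.0 and state.local_infer_ms <= deadline_ms:
        return "camera"   # run the small on-camera model
    if state.edge_rtt_ms <= deadline_ms:
        return "edge"     # offload to the nearby edge device
    return "cloud"        # fall back to the cloud server


if __name__ == "__main__":
    # Example: slow local model but a nearby edge node -> offload to the edge.
    s = CameraState(local_infer_ms=180.0, edge_rtt_ms=35.0,
                    cloud_rtt_ms=220.0, battery_pct=64.0)
    print(choose_tier(s))  # -> "edge"
```

In a multi-view setting, the same kind of policy could be evaluated per camera, with the edge or cloud tier performing the collective processing of views that land there.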
