Abstract
Distributed cameras are widely used for real-time image recognition. There are two main approaches in distributed camera systems: (1) each camera is equipped with a powerful high-end processor for local image processing, or (2) low-cost cameras with resource-constrained processors capture images and transfer them to a cloud server for classification. The first approach is costly and does not scale. The second is too slow for real-time object detection due to the transfer delays to a remote server. These problems are exacerbated in multi-view image recognition, where a central platform is required to collectively process multiple images of the same scene. Typically, a cloud server is used for this purpose, but it does not meet the real-time recognition, network-bandwidth, scalability, and power-consumption constraints of such systems. This paper proposes hierarchical neural network structures that can be realized in an edge orchestration architecture at different levels, i.e., cameras, edge devices, and cloud servers, to enable deep learning capabilities for real-time multi-view image recognition. This enables objects to be detected in the proximity of multiple cameras while data is transferred to the cloud server for deeper-layer processing and for training all connected edge devices and cameras.
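The hierarchical split described above — shallow layers evaluated near the cameras and edge devices, deeper layers evaluated on the cloud server — can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: all layer sizes, weight names, and the simple fully connected architecture are assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical weights of one small fully connected network,
# partitioned across the hierarchy (sizes are illustrative).
W_cam = rng.standard_normal((8, 16))    # shallow layer, runs on the camera
W_edge = rng.standard_normal((16, 16))  # middle layer, runs on the edge device
W_cloud = rng.standard_normal((16, 4))  # deep layer, runs on the cloud server

def camera_forward(x):
    # First layer near the sensor: only the 16-value feature
    # vector travels upstream, not the raw image data.
    return relu(x @ W_cam)

def edge_forward(features):
    # Edge device refines features from one or more nearby cameras.
    return relu(features @ W_edge)

def cloud_forward(features):
    # Deepest layer produces the final class scores centrally.
    return features @ W_cloud

x = rng.standard_normal(8)            # stand-in for one camera's input
scores = cloud_forward(edge_forward(camera_forward(x)))
```

The design point the sketch illustrates is bandwidth: each tier forwards a compact intermediate representation rather than full images, which is what makes real-time multi-view recognition over constrained networks plausible.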