Abstract

Cooperative autonomous systems, such as swarms, multi-camera systems, or fleets of self-driving vehicles, can better understand a given scene by sharing multiple modalities and varying viewpoints. This superposition of data adds robustness and redundancy to a system typically burdened with occlusions and distant, hard-to-recognize objects. Collaborative perception is a key component of cooperative autonomous systems, where modalities can include camera images, LiDAR, RADAR, and depth maps. Meanwhile, the amount of useful information that can be shared between agents in a cooperative system is constrained by current communication technologies (e.g., bandwidth limitations). Recent developments in learned compression enable the training of end-to-end cooperative systems using deep learning, with compressed communication embedded in the pipeline. We explore the use of a deep learning object detector in a cooperative setting, with a learned compression model facilitating communication between agents. To test our algorithm, this research focuses on object detection in the image domain as a proxy for one of the modalities used by collaborative systems.
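The pipeline the abstract describes can be sketched at a high level: one agent compresses intermediate features through a learned bottleneck, transmits the quantized code, and the receiving agent reconstructs and fuses those features with its own before detection. The sketch below is purely illustrative, not the paper's implementation; the random linear encoder/decoder, the quantization step, and the averaging fusion are all stand-ins (in a real system these components would be trained end-to-end).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for learned components (random weights here;
# in a trained system the encoder/decoder would be learned end-to-end).
FEAT_DIM, CODE_DIM = 64, 16
W_enc = rng.normal(size=(FEAT_DIM, CODE_DIM)) / np.sqrt(FEAT_DIM)
W_dec = np.linalg.pinv(W_enc)  # decoder approximates the encoder's inverse

def compress(features, step=0.1):
    """Project features to a low-dimensional code and quantize it.

    The bottleneck (64 -> 16 dims) plus integer quantization is what
    reduces the bandwidth needed to share features between agents.
    """
    code = features @ W_enc
    return np.round(code / step).astype(np.int32), step

def decompress(q_code, step):
    """Dequantize the received code and map it back to feature space."""
    return (q_code * step) @ W_dec

def fuse(own_feats, received_feats):
    """Toy fusion: average the local and received feature maps."""
    return 0.5 * (own_feats + received_feats)

# Agent A shares compressed features; agent B fuses them with its own
# before running its detection head (not shown).
feats_a = rng.normal(size=(10, FEAT_DIM))   # e.g. 10 region features
q_code, step = compress(feats_a)
recon_a = decompress(q_code, step)
feats_b = rng.normal(size=(10, FEAT_DIM))
fused = fuse(feats_b, recon_a)
```

The key design point is that the transmitted payload (`q_code`, 16 small integers per feature) is much smaller than the raw 64-dimensional float features, at the cost of a lossy reconstruction on the receiving side.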
