Abstract

To safely navigate and avoid obstacles in complex dynamic environments, autonomous drones need reaction times of less than 10 milliseconds. Event-based cameras have therefore become increasingly widespread in academic research on dynamic obstacle detection and avoidance for UAVs, as they outperform their frame-based counterparts in terms of latency. Several publications have reported significant results with these sensors; however, most of the experiments relied on indoor data. After a short introduction explaining the features of event-based cameras and how they differ from traditional RGB cameras, this work explores the limits of state-of-the-art event-based algorithms for obstacle recognition and detection by extending their evaluation from indoor experiments to real-world outdoor experiments. The paper shows that event-based recognition algorithms become inaccurate outdoors because an insufficient number of events is generated, and that event-based obstacle detection algorithms become inefficient due to the high ratio of noise.
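The contrast between event-based and frame-based sensing mentioned above can be illustrated with the standard event-camera model: each pixel fires an event whenever its log-intensity changes by more than a contrast threshold since the last event, so a static scene produces no output while a moving edge produces a stream of timestamped, signed events. The sketch below is illustrative only; the function name, threshold, and intensity values are assumptions, not taken from the paper.

```python
import math

def generate_events(intensities, threshold=0.2):
    """Simulate one pixel of an event camera: an event fires whenever the
    log-intensity drifts more than `threshold` from the last reference level.
    Returns a list of (sample_index, polarity) tuples."""
    events = []
    ref = math.log(intensities[0])  # reference log-intensity at the last event
    for i, value in enumerate(intensities[1:], start=1):
        log_i = math.log(value)
        while log_i - ref > threshold:   # brightness increased: positive events
            ref += threshold
            events.append((i, +1))
        while ref - log_i > threshold:   # brightness decreased: negative events
            ref -= threshold
            events.append((i, -1))
    return events

# A static scene produces no events at all,
# while a brightening edge produces a burst of positive-polarity events.
static = generate_events([100.0] * 5)
moving = generate_events([100.0, 150.0, 230.0])
```

This asynchronous, change-driven output is what gives event cameras their low latency, and it also explains the paper's outdoor findings: slow or low-contrast scenes yield too few events for recognition, while sensor noise adds spurious events that detection algorithms must filter out.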
