Abstract
In this paper, a Bayesian method for fusing multiple visual human crowd detections (in the form of heatmaps) under an autonomous UAV fleet deployment setting is proposed, aiming at enhanced vision-assisted human crowd avoidance in line with common UAV safety regulations. 2D crowd heatmaps are derived by applying deep neural human crowd detectors to multiple UAV camera streams covering the same large-scale area over time (e.g., when each drone tracks a different target). These heatmaps are then back-projected onto the 3D terrain of the navigation environment. The projected crowd heatmaps are fused using a Bayesian filtering approach that favors newer crowd observations over older ones. Thus, during flight, an area is marked as crowded (and, therefore, a no-fly zone) only if all, or most, UAV-mounted visual detectors have recently and confidently indicated crowd presence there. Empirical evaluation on synthetic multiview video sequences depicting human crowds in outdoor environments verifies the effectiveness of the proposed method compared to the no-fusion baseline.
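The recency-weighted fusion described above can be sketched as follows. This is an illustrative simplification, not the paper's exact filter: it assumes each UAV's crowd heatmap has already been back-projected onto a common ground grid, and it approximates the Bayesian recency preference with an exponential forgetting factor over observation age; the function name, `decay` rate, and `threshold` value are hypothetical.

```python
import numpy as np

def fuse_crowd_heatmaps(heatmaps, timestamps, t_now, decay=0.1, threshold=0.5):
    """Fuse projected per-UAV crowd heatmaps into one map of no-fly cells.

    heatmaps   : list/array of shape (K, H, W), per-cell crowd confidences
                 in [0, 1], one map per UAV observation.
    timestamps : acquisition time of each observation.
    t_now      : current flight time.
    decay      : exponential forgetting rate (assumed; newer observations
                 get larger weight, mimicking the recency-favoring filter).
    threshold  : fused confidence above which a cell is marked no-fly.
    """
    heatmaps = np.asarray(heatmaps, dtype=float)
    ages = t_now - np.asarray(timestamps, dtype=float)
    weights = np.exp(-decay * ages)                   # newer -> heavier weight
    weights /= weights.sum()                          # normalize over UAVs
    fused = np.tensordot(weights, heatmaps, axes=1)   # weighted average per cell
    no_fly = fused >= threshold                       # crowded cells become no-fly
    return fused, no_fly
```

Because the weights are normalized, a cell only exceeds the threshold when most detectors agree on crowd presence and the agreeing observations are recent, matching the intuition stated in the abstract.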