Abstract

Unmanned Aerial Systems (UASs), or drones, continue to increase in capability and sophistication across a wide range of applications. UASs are highly mobile, easily deployed, and capable of real-time monitoring of crowd behavior through multi-sensor detection and remote sensing of objects. These capabilities make UASs a very useful tool for Human Autonomy Teaming (HAT) applications, such as Law Enforcement (LE), that capitalize on Human Factors (HF). This study examines the concept of leveraging drone technology together with Artificial Intelligence (AI) and Machine Learning (ML) methods to produce a UAS system that can assist LE in monitoring and assessing crowd behavior during peaceful and non-peaceful events. LE agencies are increasingly tasked with engaging the dynamic environments that arise at public events. Employed as a force multiplier and autonomous tool, an AI-UAS platform would assist LE in distinguishing the behavior of peaceful participants from that of malevolent participants or instigators who may attempt to take control of an event. AI-UASs of this type would allow LE to leverage existing resources within their organizational structures and provide increased situation awareness via Live Virtual Constructive (LVC) broadcast and monitoring of these dynamic environments. These AI-UAS systems would provide real-time information to field forces as well as to command and control operations that may be remotely located. AI-UAS sensors can be dynamically allocated as needed to monitor and document crowd behavior and police actions. Video recordings would provide evidence in court and counter truth-bending recordings published by professional protestors and agenda-driven mainstream media outlets. The benefits and impact of this type of LE AI-UAS platform would be profound. Traditional visible-light sensors can be greatly affected by environmental factors that limit their ability to detect variations associated with abnormal crowd behavior. To overcome this challenge, this project proposes to use four collection methods: a Multitask Cascading CNN (MC-CNN), a ScatterNet Hybrid Deep Learning network, multiscale infrared optical flow (MIR-OF), and event cameras, including event-based vision and event-camera SLAM (Simultaneous Localization and Mapping). AI methods will be developed to monitor crowd density, average ground speed, human pose estimates, and movement behaviors, as well as to identify primary violent instigators. The proposed system will detect violent individuals in real time by leveraging onboard image processing as well as cloud processing. Fundamental research for this project is inspired by and built upon recent Drone Surveillance System (DSS) publications from IEEE and MDPI.
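As a hedged illustration of one of the cues named above, the following Python sketch estimates a crowd's average ground speed from consecutive aerial frames using pyramidal (multiscale) dense optical flow. It is not drawn from the project's actual MIR-OF pipeline; the input file name, ground sampling distance, and frame rate are assumed values chosen only for the example.

```python
# Illustrative sketch (not the authors' implementation): estimating a crowd's
# average ground speed from consecutive aerial frames using multiscale
# (pyramidal) dense optical flow. The frame source, ground sampling distance,
# and frame rate below are hypothetical.
import cv2
import numpy as np

GSD_M_PER_PX = 0.05   # assumed ground sampling distance (metres per pixel)
FPS = 30.0            # assumed frame rate of the UAS camera

def average_ground_speed(prev_frame: np.ndarray, next_frame: np.ndarray) -> float:
    """Return the mean apparent crowd speed in metres per second between two frames."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)

    # Farneback dense optical flow computed over a 3-level image pyramid (multiscale).
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

    # Per-pixel displacement magnitude in pixels, converted to metres per second.
    magnitude = np.linalg.norm(flow, axis=2)
    return float(magnitude.mean() * GSD_M_PER_PX * FPS)

if __name__ == "__main__":
    cap = cv2.VideoCapture("aerial_crowd.mp4")   # hypothetical input clip
    ok, prev = cap.read()
    while ok:
        ok, nxt = cap.read()
        if not ok:
            break
        print(f"mean crowd speed: {average_ground_speed(prev, nxt):.2f} m/s")
        prev = nxt
    cap.release()
```

In a deployed system, the averaged flow magnitude would be one input among several (crowd density, pose estimates, event-camera output) rather than a standalone indicator of abnormal behavior.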
