Abstract

Accurate classification of disasters is crucial for effective disaster management and response. This paper proposes a methodology that combines computer vision techniques with federated learning to improve disaster classification accuracy while reducing data transfer and the time it consumes. Computer vision algorithms analyze visual data captured from a variety of sources, extracting pertinent features and patterns from the images to classify disasters such as wildfires, floods, earthquakes, and cyclones. Federated learning is employed to address data privacy and transfer latency: models are trained on decentralized data sources without centralizing the data. Each participating device or data source trains a local model on its own data, and only model updates are shared and aggregated into a global model. Extensive experiments on videos of actual disasters are conducted to evaluate the proposed methodology, focusing on precision and effectiveness. The approach is expected to yield improved disaster classification models suitable for deployment in disaster management systems.
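
The local-training-plus-aggregation scheme described above can be illustrated with a minimal sketch. The abstract does not name the aggregation rule, so FedAvg-style weighted averaging is assumed here, and a toy logistic-regression classifier in NumPy stands in for the paper's computer-vision model; `local_update`, `federated_average`, and the synthetic client data are illustrative only.

```python
import numpy as np

def local_update(global_weights, client_data, lr=0.01, epochs=1):
    """Train a simple linear classifier locally on one client's data
    (features X, labels y), starting from the shared global weights.
    The raw data never leaves the client; only weights are returned."""
    W = global_weights.copy()
    X, y = client_data
    for _ in range(epochs):
        logits = X @ W
        probs = 1.0 / (1.0 + np.exp(-logits))      # sigmoid activation
        grad = X.T @ (probs - y) / len(y)          # logistic-loss gradient
        W -= lr * grad
    return W, len(y)

def federated_average(client_results):
    """Aggregate client models, weighting each by its local sample count."""
    total = sum(n for _, n in client_results)
    return sum(W * (n / total) for W, n in client_results)

# Toy round: three clients hold private data locally; only the updated
# weights are sent back to the server for aggregation each round.
rng = np.random.default_rng(0)
global_W = np.zeros((8, 1))
clients = [
    (rng.normal(size=(50, 8)), rng.integers(0, 2, size=(50, 1)).astype(float))
    for _ in range(3)
]

for _ in range(5):
    results = [local_update(global_W, data) for data in clients]
    global_W = federated_average(results)
```

In a deployment along the lines the abstract describes, the logistic regression would be replaced by the image classification network, and each round of local training and weighted aggregation would proceed in the same way.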
