Abstract

The present article introduces a novel method for processing and visualising information related to fire events, obtained from Twitter, which is utilised as a source of Volunteered Geographic Information (VGI). The fire of North-East Attica (August 2017, Greece) was used as a case study. That fire burned 15,000 decares (1,500 hectares) of woodland, a state of emergency was declared in the region, and thousands of citizens were urged to leave the area. A corpus of 74,292 tweets was processed, containing specific words and hashtags linked to the fire event and posted within one week of the fire's outbreak. About 24,000 classified and geo-referenced point observations were extracted; the data processing was performed with machine learning and a novel script, developed in R, for geo-referencing social media data related to disaster management. The classification structure applied consisted of four main categories: (I) tweets related to simple detection of the fire event, (II) tweets related to crisis management, (III) tweets related to the consequences of the fire, and (IV) tweets related to the tracking of the fire event. Further sub-classifications were performed to quantify the consequences of the fire and to indicate the precision of the geo-referenced content. The final output consisted of scatter plots analysing the frequency of the extracted content, along with maps visualising the processed information. Both the graphs and the maps utilised specific subsets of the data, posted on Twitter during specific time periods. The authors' visualisation approach considered the need for maps that make sense to all disaster management (DM) stakeholders, from decision makers to ordinary citizens and volunteers who want to protect human lives and property. The results of the current research provide steps towards efficient automation and give readers a credible overview of the generated graphs and maps at a glance. Finally, at its current stage, the authors' methodological framework drastically reduces the time needed for data processing to less than two hours.
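
To make the filtering and classification steps concrete, the following minimal R sketch shows how a tweet corpus could be narrowed to fire-related content posted within a one-week window and then assigned to the four main categories with simple keyword rules. This is not the authors' script: the column names (text, created_at), the keyword lists, the window start date and the rule patterns are illustrative assumptions, and the paper itself relies on machine learning rather than keyword rules for classification.

# Minimal illustrative sketch (not the authors' script): filter a tweet
# corpus to fire-related content posted within one week of the outbreak,
# then assign each tweet to one of the four main categories.
# Column names, keywords, dates and rules are assumptions.
library(dplyr)
library(stringr)
library(lubridate)

# Assumed keyword / hashtag list (the paper's exact terms are not given here)
fire_terms <- c("#fire", "#attica", "wildfire", "φωτιά", "πυρκαγιά")

# Assumed start of the one-week collection window
fire_start <- ymd_hms("2017-08-13 00:00:00", tz = "UTC")

# 'tweets' is assumed to be a data frame with columns text and created_at
filter_corpus <- function(tweets) {
  pattern <- regex(str_c(fire_terms, collapse = "|"), ignore_case = TRUE)
  tweets %>%
    filter(created_at >= fire_start,
           created_at < fire_start + days(7),
           str_detect(text, pattern))
}

# Keyword-rule stand-in for the machine-learning classifier used in the paper
classify_tweet <- function(text) {
  t <- str_to_lower(text)
  case_when(
    str_detect(t, "evacuat|firefight|helicopter|volunteer") ~ "II: crisis management",
    str_detect(t, "burn|damage|destroy|injur")              ~ "III: consequences",
    str_detect(t, "spread|front|wind|direction")            ~ "IV: tracking",
    TRUE                                                    ~ "I: event detection"
  )
}

A call such as mutate(filter_corpus(tweets), category = classify_tweet(text)) would then attach a main category to every filtered tweet; the sub-classifications described in the abstract (e.g. quantifying the consequences) would refine these labels further.

Similarly, the geo-referencing and mapping steps can be sketched as matching place names in the tweet text against a small gazetteer and plotting the resulting points. Again, this is only an assumption-laden illustration: the gazetteer entries and their approximate coordinates, the column names and the plotting choices are hypothetical, and the authors' own R script and cartographic outputs are more elaborate.

# Minimal illustrative sketch (assumptions throughout): geo-reference tweets
# by matching place names against a small gazetteer, then map the points.
library(dplyr)
library(stringr)
library(ggplot2)

# Hypothetical gazetteer of toponyms in north-east Attica with approximate
# WGS84 coordinates; a real workflow would use a fuller gazetteer or geocoder.
gazetteer <- tibble::tribble(
  ~place,       ~lon,   ~lat,
  "Kalamos",    23.87,  38.29,
  "Varnavas",   23.99,  38.22,
  "Kapandriti", 23.88,  38.22
)

# Attach coordinates to tweets whose text mentions a known place name;
# unmatched tweets keep NA coordinates and can be flagged as low precision.
georeference <- function(tweets) {
  pattern <- regex(str_c(gazetteer$place, collapse = "|"), ignore_case = TRUE)
  tweets %>%
    mutate(place = str_to_title(str_extract(text, pattern))) %>%
    left_join(gazetteer, by = "place")
}

# Quick point map, coloured by the main category assigned earlier
plot_points <- function(geo_tweets) {
  ggplot(filter(geo_tweets, !is.na(lon)),
         aes(x = lon, y = lat, colour = category)) +
    geom_point(alpha = 0.6) +
    coord_quickmap() +
    labs(title = "Geo-referenced fire-related tweets (illustrative)",
         x = "Longitude", y = "Latitude", colour = "Category")
}

Keeping unmatched tweets with NA coordinates, rather than discarding them, mirrors the abstract's concern with indicating the precision of the geo-referenced content: only points with a confident location match appear on the map, while the rest remain available for the frequency plots.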
