This concept paper describes a new tool to support institutions in implementing targeted countermeasures, based on quantitative and multi-scale elements, for the prevention of and response to emergencies such as the current COVID-19 pandemic. The tool is a cloud-based, centralized, multi-user platform that relies on artificial intelligence (AI) algorithms to process heterogeneous data and produce a level of risk as output. The model includes a neural network that is first trained to learn the correlations between selected inputs relevant to the case of interest: environmental variables (chemical and physical, such as meteorological conditions), human activity (such as traffic and crowding), the level of pollution (in particular the concentration of particulate matter) and epidemiological variables describing the evolution of the contagion. The tool realized in the first phase of the project will later serve both as a decision support system (DSS) with predictive capacity, when fed with actual measured data, and as a simulation bench in which selected input values are tuned to identify which of them lead to a decrease in the level of risk. In this way, we aim to design different scenarios that compare restrictive strategies and their expected benefits, so as to adopt measures sized to actual needs, adapted to the specific areas of analysis and useful for safeguarding human health, while also comparing the economic and social impacts of the choices. Although this is a concept paper, some preliminary analyses are reported and two case studies are presented, whose results highlight a correlation between NO2, mobility and COVID-19 data. However, given the complexity of the virus diffusion mechanism, which is linked to air pollutants but also to many other factors, these preliminary studies confirm the need, on the one hand, to carry out more in-depth analyses and, on the other, to use AI algorithms to capture the hidden relationships within the huge amounts of data to be processed.
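To make the modelling idea more concrete, the following is a minimal illustrative sketch, not the platform's actual implementation: a small feedforward network (written here in PyTorch) that maps the input groups named above (environmental, mobility, pollution and epidemiological features) to a single normalised risk score. All feature counts, layer sizes and training settings are assumptions chosen purely for illustration.

```python
import torch
import torch.nn as nn

# Hypothetical feature groups from the abstract: environmental variables
# (e.g. temperature, humidity), human activity (traffic, crowding),
# pollution (PM2.5, PM10, NO2) and epidemiological indicators (e.g. new
# cases). The total number of features below is illustrative only.
N_FEATURES = 12  # assumed number of input variables per area and day


class RiskNet(nn.Module):
    """Toy regressor: heterogeneous inputs -> scalar risk level in [0, 1]."""

    def __init__(self, n_features: int = N_FEATURES):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32),
            nn.ReLU(),
            nn.Linear(32, 16),
            nn.ReLU(),
            nn.Linear(16, 1),
            nn.Sigmoid(),  # risk expressed as a normalised score
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


# Training-loop sketch on synthetic data standing in for measured inputs.
model = RiskNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(256, N_FEATURES)  # placeholder measurements
y = torch.rand(256, 1)            # placeholder observed risk levels

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```

Under this sketch, the "simulation bench" use described in the abstract would amount to re-evaluating the trained model after tuning one input group (for example, lowering the mobility features) and comparing the predicted risk across scenarios; the abstract does not specify the actual architecture or training procedure.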