Abstract

This work is motivated by the goal of building a multimodal COVID-19 pandemic forecasting platform for a large-scale academic institution, so as to minimize the impact of COVID-19 after academic activities resume. The design of this multimodal platform is driven by three data sources: video, audio, and tweets. Before conducting COVID-19 prediction, we first trained diverse models, including traditional machine learning models (e.g., Naive Bayes and support vector machines with TF-IDF features) and deep learning models [e.g., long short-term memory (LSTM), MobileNetV2, and SSD], to extract meaningful information from the video, audio, and tweet streams by 1) detecting and counting face masks, 2) detecting and counting coughs as a signal of potentially infected cases, and 3) conducting sentiment analysis on COVID-19-related tweets. Finally, we fed the multimodal analysis results, together with daily confirmed-case counts and social distancing metrics, into an LSTM model to predict the daily increase rate of confirmed cases for the following week. Important observations with supporting evidence are presented.
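To make the forecasting step concrete, the following is a minimal sketch of an LSTM regressor that consumes a window of daily multimodal features and outputs the predicted daily increase rate. The abstract does not specify the architecture, so the 14-day window, the five-feature layout (mask count, cough count, tweet sentiment, confirmed cases, social distancing metric), and the hidden size are illustrative assumptions, not the authors' actual configuration.

```python
import torch
import torch.nn as nn

class MultimodalCaseForecaster(nn.Module):
    """LSTM mapping a window of daily multimodal features to the
    predicted daily increase rate of confirmed cases.

    Feature layout and sizes are assumptions for illustration; the
    paper's abstract does not specify them.
    """

    def __init__(self, n_features: int = 5, hidden_size: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features,
                            hidden_size=hidden_size,
                            batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, window_days, n_features); per day we assume
        # [mask_count, cough_count, tweet_sentiment,
        #  confirmed_cases, social_distancing_metric]
        _, (h_n, _) = self.lstm(x)
        # Use the final hidden state to regress the increase rate.
        return self.head(h_n[-1]).squeeze(-1)  # shape: (batch,)

# Toy usage with synthetic data: 14-day windows of 5 daily features.
model = MultimodalCaseForecaster()
window = torch.randn(8, 14, 5)   # batch of 8 hypothetical windows
pred = model(window)             # predicted daily increase rates
print(pred.shape)                # torch.Size([8])
```

In a setup like this, the per-day feature vector would be assembled from the upstream detectors (mask and cough counts from the vision and audio models, a sentiment score from the tweet classifier) concatenated with the public-health signals, and the model would be trained with a standard regression loss such as MSE against observed increase rates.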
