Abstract

Globally, the coronavirus epidemic has now hit the lives of millions of people around the world. The threat of this virus continues to grow as new cases appear every day. Countries affected by the coronavirus are currently taking important measures to respond to it using artificial intelligence (AI) and Big Data technologies. According to the World Health Organization (WHO), AI and Big Data have played an important role in China's response to COVID-19, the disease caused by the coronavirus SARS-CoV-2. Predicting the emergence of an epidemic, from the appearance of the coronavirus to a person's predisposition to develop the disease, is fundamental to combating it. In this battle, Big Data is on the front line. However, Big Data alone cannot provide all of the expected insights or derive value from the manipulated data. This is why we propose a semantic approach to facilitate the use of these data. In this paper, we present a novel approach that combines Semantic Web Services (SWS) with Big Data characteristics in order to extract significant information from multiple data sources that can be exploited to generate real-time statistics and reports.

Highlights

  • The 2019-2020 coronavirus outbreak is a pandemic of an emerging infectious disease, called COVID-19, caused by the coronavirus SARS-CoV-2, which began in December 2019 in Wuhan, central China, and spread all over the world [1]

  • Protégé is used since it is an open-source tool that allows easy construction of domain ontologies [30] (a programmatic sketch follows this list)

  • We focus on building local ontologies
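
As a minimal sketch of what building a local ontology can look like, the snippet below defines a few classes, properties, and individuals in Python with the owlready2 library. Protégé itself is a graphical editor, so owlready2 is used here purely as a programmatic stand-in; every class, property, and individual name is a hypothetical illustration, not the paper's actual ontology.

    from owlready2 import get_ontology, Thing, ObjectProperty, DataProperty

    onto = get_ontology("http://example.org/covid-local.owl")  # hypothetical IRI

    with onto:
        class Person(Thing): pass
        class Disease(Thing): pass
        class Symptom(Thing): pass

        class hasDisease(ObjectProperty):   # links a person to a diagnosis
            domain = [Person]
            range = [Disease]

        class hasSymptom(ObjectProperty):   # links a person to observed symptoms
            domain = [Person]
            range = [Symptom]

        class hasAge(DataProperty):         # simple typed attribute
            domain = [Person]
            range = [int]

        # A few individuals to make the model concrete
        covid19 = Disease("COVID19")
        fever = Symptom("Fever")
        patient = Person("patient_001")
        patient.hasDisease = [covid19]
        patient.hasSymptom = [fever]
        patient.hasAge = [42]

    # Save as RDF/XML so the file can be opened and refined in Protégé
    onto.save(file="covid-local.owl", format="rdfxml")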

Introduction

The 2019-2020 coronavirus outbreak is a pandemic of an emerging infectious disease, called COVID-19, caused by the coronavirus SARS-CoV-2, which began in December 2019 in Wuhan, central China, and spread all over the world [1]. According to Gartner's definition, Big Data brings together data of great variety, arriving in increasing volumes and at high speed; these are the three "Vs" [10]. Big Data is made up of complex datasets, mostly from new sources, so large that traditional data processing software cannot handle them. This huge amount of data can be used to solve problems that could never have been solved before. Storing data in a data lake without any data management is one of the main Big Data challenges; to address it, consistent knowledge must be extracted from such data, as illustrated in the sketch below.
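
To make the extraction step concrete, the following minimal sketch (in Python, using the rdflib library) lifts raw data-lake records into consistent RDF triples and runs a SPARQL aggregation over them. The namespace, record fields, and figures are hypothetical illustrations, not the paper's actual pipeline or data.

    from rdflib import Graph, Literal, Namespace, RDF
    from rdflib.namespace import XSD

    EX = Namespace("http://example.org/covid#")  # hypothetical namespace

    # Raw records as they might land in a data lake (hypothetical sample data)
    records = [
        {"id": "r1", "country": "China", "confirmed": 120},
        {"id": "r2", "country": "Italy", "confirmed": 80},
        {"id": "r3", "country": "China", "confirmed": 95},
    ]

    # Lift the records into typed RDF triples
    g = Graph()
    g.bind("ex", EX)
    for r in records:
        report = EX[r["id"]]
        g.add((report, RDF.type, EX.CaseReport))
        g.add((report, EX.country, Literal(r["country"])))
        g.add((report, EX.confirmed, Literal(r["confirmed"], datatype=XSD.integer)))

    # SPARQL aggregation: total confirmed cases per country
    # (a "real-time statistic" in miniature)
    query = """
        PREFIX ex: <http://example.org/covid#>
        SELECT ?country (SUM(?n) AS ?total)
        WHERE { ?r a ex:CaseReport ; ex:country ?country ; ex:confirmed ?n . }
        GROUP BY ?country
    """
    for row in g.query(query):
        print(row.country, row.total)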
