Abstract
The COVID-19 pandemic has been truly global and multidimensional in scope, with ramifications extending well beyond health. Yet, unlike previous crises, there is hope that timely release of relevant data sets, as well as advances in AI (artificial intelligence) technology, could lead to compressed timescales in finding a vaccine or cure. Despite the huge existing body of academic literature on the coronavirus family, searching through such a corpus, including new research that has emerged in the wake of the crisis, is a daunting task even for experts. Simple keyword search over such corpora is insufficient for experts who want answers to questions that require linking together multiple pieces of information across documents. In this article, we review an innovative AI technology called a knowledge graph (KG) that could be used to fulfill such complex information needs. We detail the potential for KGs to play an important role in the fight against COVID-19. We also cover challenges and ongoing collaborative implementations of COVID-19 KGs in industry and academia.
Highlights
In a recent, instructive article titled “Hoping to Understand the Virus, Everyone Is Parsing a Mountain of Data,” the New York Times correspondent Julie Bosman uses a striking example of recent coronavirus case counts to show how difficult it is to draw conclusions without analyzing multiple data points in the right context (Bosman, 2020).
Many improvements can be traced to sustained investment in the kinds of knowledge-centric artificial intelligence (AI) technologies that we describe in this overview (Lockard et al., 2018; Singhal, 2012).
We review a novel kind of AI technology called a knowledge graph (KG) that is designed to bring us one step closer to Vannevar Bush’s original vision.
Summary
In a recent, instructive article titled “Hoping to Understand the Virus, Everyone Is Parsing a Mountain of Data,” the New York Times correspondent Julie Bosman uses a striking example of recent coronavirus case counts to show how difficult it is to draw conclusions without analyzing multiple data points in the right context (Bosman, 2020). The problem is that, for all their speed and ‘smarts,’ computers are still far from being a natural and ‘intimate’ supplement to our memory: they are not able to understand, let alone answer, the kinds of questions over ‘mountains’ of data that subject matter experts would have been able to answer if only they had been able to read, process, and retain that data. Put another way, machines can process great amounts of data, but the processing does not necessarily yield the knowledge that is critical for solving real-world problems (Robbins, 2019). Building machines that understand and work with knowledge, rather than just data, is one of the holy grails of general AI (Russell & Norvig, 2009). If such programs were to become available, even within limited domains, they would significantly accelerate scientific progress by providing answers to complex questions that may today require many hours of reading, even by subject matter experts.
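To make the idea of linking multiple pieces of information concrete, the following is a minimal sketch of a knowledge graph represented as (subject, predicate, object) triples, with a multi-hop query over it. All entity names, relations, and the query function here are illustrative assumptions for exposition, not part of any real COVID-19 KG or of the systems this article reviews.

```python
# A toy knowledge graph: a set of (subject, predicate, object) triples.
# The facts below are purely illustrative examples.
triples = {
    ("drug_A", "inhibits", "protein_X"),
    ("drug_B", "inhibits", "protein_X"),
    ("protein_X", "part_of", "SARS-CoV-2"),
    ("SARS-CoV-2", "causes", "COVID-19"),
}

def objects(subject, predicate):
    """Return all objects linked to `subject` via `predicate`."""
    return {o for s, p, o in triples if s == subject and p == predicate}

def drugs_targeting(disease):
    """Multi-hop query: drug -inhibits-> protein -part_of-> virus -causes-> disease.

    No single triple answers the question; the answer emerges only by
    chaining three relations together, which is exactly what keyword
    search over documents cannot do.
    """
    results = set()
    for drug, predicate, protein in triples:
        if predicate != "inhibits":
            continue
        for virus in objects(protein, "part_of"):
            if disease in objects(virus, "causes"):
                results.add(drug)
    return results

print(sorted(drugs_targeting("COVID-19")))  # ['drug_A', 'drug_B']
```

In practice, KGs are stored in dedicated triple stores and queried with languages such as SPARQL rather than hand-written loops, but the principle is the same: answers to complex questions are assembled by traversing edges across many linked facts.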