Abstract

Despite recent Artificial Intelligence (AI) advances in narrow task areas such as face recognition and natural language processing, the emergence of general machine intelligence remains elusive. Such an AI must overcome several challenges, one of which is the ability to be aware of, and appropriately handle, context. In this article, we argue that context must be rigorously treated as a first-class citizen in AI research and discourse in order to achieve true general machine intelligence. Unfortunately, context is only loosely defined, if at all, within AI research. This article aims to synthesize the myriad pragmatic ways in which context has been used, or implicitly assumed, as a core concept in multiple AI sub-areas, such as representation learning and commonsense reasoning. While not all definitions are equivalent, we systematically identify a set of seven features associated with context in these sub-areas. We argue that such features are necessary for a sufficiently rich theory of context, as applicable to practical domains and applications in AI.

Highlights

  • Artificial Intelligence (AI) has made enormous strides in the last several years, due both to the advent of technologies such as deep learning [1] and to the cost-effectiveness and pervasiveness of supporting infrastructures, such as the evolution of GPUs [2], cloud computing [3], and open datasets and software [4].

  • While we do not claim that these are the only areas pertinent to the research of context-rich AI, we use them as our primary basis for enumerating the requirements that a potential theory of context must fulfill to be generalizable to the several AI areas that rely on context.

  • We explored the different definitions of context and argued that, despite the prevalence and maturity of context-rich AI research, context is still treated implicitly in much of this research.

Introduction

Artificial Intelligence (AI) has made enormous strides in the last several years, due both to the advent of technologies such as deep learning [1] and to the cost-effectiveness and pervasiveness of supporting infrastructures, such as the evolution of GPUs [2], cloud computing [3], and open datasets and software [4]. While many ‘dictionary’ (and in a few cases, academic) definitions of context exist (as further detailed in Section 2), and there is a commonsense understanding of the word, there is a lack of clarity about how such definitions map to current research and practice in the AI community. With this motivation in mind, we argue that context needs to be conceptualized as a novel class of information in intelligent systems. Only through a direct discussion of context can we start to distinguish between the different kinds of context, and disambiguate the varying, conceptually overloaded ways in which context is used as a term in professional practice. To this end, we discuss how context is understood and applied in various influential sub-fields of AI, including representation learning, the Semantic Web, and explainable AI (Section 3). Our more modest goal is to systematically discuss context, as a first-class citizen and novel class of information, in AI systems and applications that are expected to interface and interact with humans in complex problem domains, and that continue to be active subjects of research.

Background on Context
Example Usage
Research in Context-Rich Artificial Intelligence
Representation Learning
Commonsense Reasoning and Knowledge
Knowledge Graphs and Semantic Web
Explainable AI
Understanding Context in Practical AI Research
Locality
Selective Activation of Salient Elements
Relational Dependencies
Implicitness
Open-World Environments
Event-Driven Triggers
Supporting Ecosystems and Social Factors
Conclusions