Abstract
Several researchers have proposed a new application for human augmentation: providing human supervision to autonomous artificial intelligence (AI) systems. In this paper, we introduce a framework to implement this proposal, which consists of using Brain–Computer Interfaces (BCI) to influence AI computation via some of its core algorithmic components, such as heuristic search. Our framework is based on a joint analysis of philosophical proposals characterising the behaviour of autonomous AI systems and recent research in cognitive neuroscience that supports the design of appropriate BCI. Our framework is defined as a motivational approach which, on the AI side, influences the shape of the solution produced by heuristic search using a BCI motivational signal reflecting the user’s disposition towards the anticipated result. The actual mapping is based on a measure of prefrontal asymmetry, which is translated into a non-admissible variant of the heuristic function. Finally, we discuss results from a proof-of-concept experiment using functional near-infrared spectroscopy (fNIRS) to capture prefrontal asymmetry and control the progression of AI computation on traditional heuristic search problems.
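To make the mapping concrete, the following sketch illustrates one plausible reading of the abstract's core idea: a normalised prefrontal-asymmetry score modulating the weight of the heuristic in weighted A* search, so that f(n) = g(n) + w·h(n) with w > 1 yielding a non-admissible heuristic. All names, the weight range, and the direction of the mapping are illustrative assumptions, not the paper's actual implementation.

```python
import heapq

def asymmetry_to_weight(score, w_min=1.0, w_max=2.0):
    """Map a normalised asymmetry score in [-1, 1] to a heuristic weight.
    Here a high (approach-motivated) score keeps the weight near 1, i.e.
    near-admissible search; a low score inflates the heuristic, biasing
    the search towards faster, greedier progress. The direction and the
    range [w_min, w_max] are illustrative assumptions."""
    score = max(-1.0, min(1.0, score))
    return w_min + (w_max - w_min) * (1.0 - score) / 2.0

def weighted_astar(start, goal, neighbours, h, weight):
    """Weighted A*: f(n) = g(n) + weight * h(n). For weight > 1 the
    heuristic becomes non-admissible, so the returned solution may be
    suboptimal but is typically found with fewer expansions."""
    frontier = [(weight * h(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt, cost in neighbours(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier,
                               (g2 + weight * h(nxt), g2, nxt, path + [nxt]))
    return None
```

In this sketch the BCI signal only rescales the heuristic; the search algorithm itself is unchanged, which matches the abstract's description of influencing computation "via core algorithmic components" rather than replacing them.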
Introduction and Rationale
Human augmentation aims at extending human cognitive abilities, often in a situated, task-specific fashion.
Current integrative models of executive function control [32] distinguish between hot and cold executive control and tend to associate the dorsolateral prefrontal cortex (DLPFC) with cold control and the orbitofrontal cortex (OFC) with motivation and reward anticipation. This would be consistent with source-localisation studies, which have suggested that frontal electroencephalographic (EEG) asymmetry at rest is mediated by left DLPFC and OFC activation [33].
This is the solution we have adopted in previous work, owing to the response time of the functional near-infrared spectroscopy (fNIRS) signal. It could still be of interest even when using EEG-based frontal asymmetry scores as input, because of the signal dynamics and the need to stabilise the signal over the neurofeedback (NF) epoch.
Summary
Researchers across a variety of disciplines have warned of the potential adverse consequences of unregulated AI progress, among them the automation of white-collar jobs [6], the development of AI-endowed autonomous warfare [7], and even the rise of superintelligent AI entities [8,9]. Whether or not this superintelligence threat materialises, the shorter-term availability of advanced AI systems able to outperform human experts at an increasing number of professional tasks is sufficient to justify research into hybrid cognitive systems. Several authors have suggested human augmentation as a potential solution to the threat posed by superintelligence, with augmentation often achieved through BCI implementations. Although most of these proposals remain largely underspecified, and some are not consistent with the state of the art of BCI systems, it is worth briefly reviewing the commonalities between them. We take a system-design perspective to review the conditions for a successful implementation of the framework, as well as possible implementation variants.