Abstract

There have been several recent attempts at using Artificial Intelligence systems to model aspects of consciousness (Gamez, 2008; Reggia, 2013). In the present attempt, Deep Neural Networks have been given additional functionality, allowing them to emulate phenomenological aspects of consciousness by self-generating information that represents multi-modal inputs as either sounds or images. We added these functions to determine whether knowledge of the input's modality aids the networks' learning. In some cases, these representations made the model more accurate after training and reduced the amount of training required for the model to reach its highest accuracy scores.

Highlights

  • Discussions about the properties of consciousness have taken place within multiple disciplines

  • We found that it is possible to extract simple representations of the modality being processed by a Deep Neural Network (DNN) using information from the hidden layers within that network

  • The changes in difficulty resulted from the modifications made to the hidden layers' weights by training the neural network; this training was not affected by the Hebbian classifiers


Introduction

Discussions about the properties of consciousness have taken place within multiple disciplines. By recreating properties of consciousness within AI models, researchers can test those properties' effects directly. This is the approach used in this article, and it has been used by others previously (Arrabales et al., 2010b; Zaadnoordijk and Besold, 2018; Schartner and Timmermann, 2020). We used these simple representations to aid other DNNs in classifying the same multi-modal data; the purpose of these experiments was to determine whether the addition of these representations improved network performance. Others believe that specific behaviours are required: if something possesses functions equivalent to consciousness, it is conscious. Some researchers use this functional definition when describing robots demonstrating behaviour such as mirror self-recognition (Takeno et al., 2005) and in other self-awareness research (Chella et al., 2020). Defining consciousness architecturally or functionally differs from the neuroscience approach of focusing on biological consciousness by examining the functioning of structures within the (typically human) brain (see Dehaene, 2014 for a review).
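The idea of reading the input's modality out of a network's hidden layers with a Hebbian classifier can be illustrated with a minimal sketch. This is not the authors' implementation: the synthetic "hidden activations", the dimensions, and the learning rate below are all illustrative assumptions, and the Hebbian rule shown is the textbook correlation-based update (no error signal), assuming a one-hot modality label.

```python
import numpy as np

rng = np.random.default_rng(0)
n_hidden, n_modalities, n_samples = 32, 2, 200

# Simulate hidden-layer activations whose statistics differ by
# modality (image vs. sound). A real experiment would record these
# from a trained DNN; here one prototype pattern per modality plus
# noise stands in for them.
prototypes = rng.normal(size=(n_modalities, n_hidden))
labels = rng.integers(0, n_modalities, size=n_samples)
activations = prototypes[labels] + 0.5 * rng.normal(size=(n_samples, n_hidden))

# Hebbian rule: strengthen the weight between each output unit and
# each hidden unit in proportion to their co-activation,
# W += eta * outer(y, x), with y the one-hot modality label.
eta = 0.01
W = np.zeros((n_modalities, n_hidden))
for x, lab in zip(activations, labels):
    y = np.zeros(n_modalities)
    y[lab] = 1.0
    W += eta * np.outer(y, x)

# Classify each sample by the most strongly driven output unit.
preds = np.argmax(activations @ W.T, axis=1)
accuracy = (preds == labels).mean()
print(f"modality readout accuracy: {accuracy:.2f}")
```

Because the update uses only the correlation between activations and labels, the readout leaves the underlying network untouched, which matches the highlight that training the DNN was not affected by the Hebbian classifiers.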
