Abstract
In this essay we critically evaluate the progress that has been made in solving the problem of meaning in artificial intelligence (AI) and robotics. We remain skeptical about solutions based on deep neural networks and cognitive robotics, which in our opinion do not fundamentally address the problem. We agree with the enactive approach to cognitive science that things appear as intrinsically meaningful for living beings because of their precarious existence as adaptive autopoietic individuals. But this approach inherits the problem of failing to account for how meaning as such could make a difference for an agent’s behavior. In a nutshell, if life and mind are identified with physically deterministic phenomena, then there is no conceptual room for meaning to play a role in its own right. We argue that this impotence of meaning can be addressed by revising the concept of nature such that the macroscopic scale of the living can be characterized by physical indeterminacy. We consider the implications of this revision of the mind-body relationship for synthetic approaches.
Highlights
How can we design artificial agents such that their encounters with the world make sense to them, that is, such that the meaningful aspects of those encounters are experienced from the agents' own intrinsic perspective as relevant? This is the problem of meaning, which has haunted artificial intelligence (AI) since the beginning of the field [1].
If this nature– strategy is on the right track, it would mean that digital computers, and classical dynamical systems more generally, are inherently unsuitable frameworks for embodying meaning, given that they are complete and deterministic systems.
The upshot is that if we want to design artificial systems that solve the problem of meaning, we have to build them such that their objective determinations can partially withdraw, making room for subjective influences to make a difference in their own right.
Summary
How can we design artificial agents such that their encounters with the world make sense to them, that is, such that the meaningful aspects of those encounters are experienced from the agents' own intrinsic perspective as relevant? This is the problem of meaning, which has haunted artificial intelligence (AI) since the beginning of the field [1]. Even the enactive approach, which has made a substantial effort to account for value and meaning in a non-representational manner [7], leaves it mysterious how the subjective, i.e., value, meaning, intention, purpose, and so on, could as such, on its own terms, make a difference for the movements of an agent, if it is assumed that the agent's internal and external activity is already completely governed by purely dynamical laws. We conclude by discussing the implications of this revised concept of nature for the design of artificial systems, such that they make room for meaning to play a role.