The Grounding of Symbols in Affordances

William H. Vidal
Scientific and Philosophical Studies of the Mind Program
Franklin and Marshall College
Lancaster, PA 17604
Wh_vidal@fandm.edu

Affordances vs. Experience

In the past two decades, the development of artificial intelligence has received thorough criticism from many philosophers. In his paper "The Symbol Grounding Problem" (1990), Harnad attempts to prove A.I. systems' inaptitude to ground computational symbols in experience. To determine whether experience is the only criterion for grounding, a non-representationalist framework must be applied to artificial systems. This paper is an attempt to demonstrate the ability of artificial systems to ground their symbols in the potential activity afforded by their environment. To illustrate the grounding of symbols in affordances, the analysis is presented in terms of Marr's three descriptive levels (Marr, 1982). Within the computational level, I present an ecological model of perception and show how the behavior of an A.I. system is intelligible in terms of affordances. Placing an A.I. system's behavior within the broader context of its environment widens the potential for grounding and draws the focus away from inner formal-symbol operations. Following the establishment of the environment's relevance to an A.I. system's perceptual mechanism is an examination of the representational level. The program, at the representational level, represents the bridge between a system's ecological perceptual mechanism and the implementation of stimulus information. I conclude the analysis with the implementation layer's implications for which symbols need grounding and for the causal link between the reception of stimulus information and motor commands.

The Symbol Grounding Problem

The problem theoretically arises because the symbols in an A.I. system's representation layer are manipulated formally according to preset programmed rules and have no causal connection with the exterior world. In other words, the symbol grounding problem highlights the lack of connectedness between the symbols within the programmed layer of an A.I. system and the exterior environment. Whether one is trying to prove that A.I. systems can have intentionality, to develop a potential humanoid, or to examine the replication of human processes through artificial intelligence, the symbol grounding problem poses a barrier. How can one make a machine that has meaningful thoughts if its symbols are detached from any form of reality? According to Harnad, human mental symbols are grounded in our daily interactions with the exterior world. The association of symbols with memories leads Harnad to equate symbol grounding's constitutive element with experience and memories (Harnad, 1990).
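To make the worry concrete, consider a minimal sketch of a purely formal symbol system. This example is mine, not Harnad's, and the tokens and rules are entirely illustrative: symbols are rewritten according to preset rules, and nothing in the process connects a token such as "ZEBRA" to anything outside the program.

```python
# A purely formal symbol system: tokens are rewritten by preset rules.
# Nothing here causally connects any token to the exterior world, which
# is exactly the gap the symbol grounding problem points to.

RULES = {
    ("HORSE", "STRIPES"): "ZEBRA",   # rule: HORSE + STRIPES -> ZEBRA
    ("ZEBRA", "WINGS"): "PEGASUS",   # rules compose, yet remain ungrounded
}

def rewrite(symbols: list[str]) -> list[str]:
    """Apply the first matching rewrite rule to a pair of adjacent tokens."""
    for i in range(len(symbols) - 1):
        pair = (symbols[i], symbols[i + 1])
        if pair in RULES:
            return symbols[:i] + [RULES[pair]] + symbols[i + 2:]
    return symbols

# The "meaning" of ZEBRA is exhausted by its role in the rule table:
print(rewrite(["HORSE", "STRIPES"]))  # -> ['ZEBRA']
```

However long the rule table grows, every symbol's role is fixed by other symbols, which is why an appeal beyond the program layer is needed.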
An Ecological Model

Within the bounds of his analysis, Harnad forwards valid criticisms of A.I. systems. Indeed, from a program-layer investigation and a focus on internal abstract computation, A.I. systems do not have any connections with the exterior world. This is not to deny that a system's variables and design originate with a programmer, or that a system's general concept is the realization of a programmer's abstractions. These facts simply entail that the construction of an A.I. system is initially based on a designer's abstractions. Although the behavioral and program layers are the components we perceive and control, a grounding mechanism's processes reside within the implementation layer.

An affordance-structured analysis grounds an A.I. system's symbols by appealing to stimulus information, as opposed to the traditionalist appeal to causal energy connections. More precisely, the theory of affordances and direct perception enables a system to implement behavioral modifications through opportunities for action specified in optic arrays. An alternate grounding mechanism must fulfill the three central tenets of symbol grounding theory: meaningful perception, purposeful action, and environment-dependent symbols. Direct perception and Gibson's ecological approach forward a dynamic model of action grounded in affordances that satisfies these three criteria. The symbol grounding problem is perspective-dependent and is, in nature, only a theoretical conflict.
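As a rough illustration of this alternative, the following sketch (again mine, not the paper's; the sensor quantities, affordance names, and thresholds are all hypothetical) shows how an affordance-grounded loop lets stimulus information select among opportunities for action. A symbol such as "PASSABLE" is tokened only when the sampled optic array currently specifies that opportunity, so its occurrences covary with the environment rather than with a preset rule table.

```python
from dataclasses import dataclass

# Hypothetical stimulus information sampled from the optic array.
# In a real system these values would come from sensors.
@dataclass
class OpticArraySample:
    aperture_width: float   # width of a gap ahead, in body-widths
    surface_slope: float    # slope of the ground surface, in degrees
    looming_rate: float     # rate of optical expansion of an obstacle

def detect_affordances(sample: OpticArraySample) -> set[str]:
    """Map stimulus information to the actions it currently affords.

    The symbols returned here are environment-dependent: they are
    tokened only when the optic array specifies the opportunity."""
    afforded = set()
    if sample.aperture_width > 1.0:
        afforded.add("PASSABLE")      # gap wider than the body affords passing
    if sample.surface_slope < 30.0:
        afforded.add("TRAVERSABLE")   # gentle slope affords locomotion
    if sample.looming_rate > 0.5:
        afforded.add("AVOIDABLE")     # rapid looming affords evasive action
    return afforded

def act(afforded: set[str]) -> str:
    """Issue a motor command selected by the currently afforded actions."""
    if "AVOIDABLE" in afforded:
        return "swerve"
    if "PASSABLE" in afforded and "TRAVERSABLE" in afforded:
        return "advance"
    return "halt"

# Perception-action loop: stimulus information, not an internal rule table,
# determines which symbols are tokened and which motor command is issued.
sample = OpticArraySample(aperture_width=1.4, surface_slope=12.0, looming_rate=0.1)
print(act(detect_affordances(sample)))  # -> advance
```

On this picture the three tenets are addressed in one loop: perception is meaningful because each symbol stands for a specified opportunity, action is purposeful because commands are selected by those opportunities, and the symbols are environment-dependent because they cannot be tokened without the corresponding stimulus information.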
References

Harnad, S. (1990). The Symbol Grounding Problem. Physica D, 42, 335-346.
Marr, D. (1982). Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. New York: W.H. Freeman and Co.