Abstract
A cognitively-autonomous artificial agent may be defined as one able to modify both its external world-model and the framework by which it represents the world, requiring two simultaneous optimization objectives. This presents deep epistemological issues centered on the question of how a framework for representation (as opposed to the entities it represents) may be objectively validated. In this summary paper, formalizing previous work in this field, it is argued that subsumptive perception-action learning uniquely has the capacity to resolve these issues by a) building the perceptual hierarchy from the bottom up so as to ground all proposed representations and b) maintaining a bijective coupling between proposed percepts and projected action possibilities to ensure empirical falsifiability of these grounded representations. In doing so, we will show that such subsumptive perception-action learners intrinsically incorporate a model for how intentionality emerges from randomized exploratory activity in the form of 'motor babbling'. Moreover, such a model of intentionality also naturally translates into a model for human-computer interfacing that makes minimal assumptions as to cognitive states.
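The abstract's coupling of grounded percepts to projected action possibilities via 'motor babbling' can be illustrated with a small toy sketch. The following Python is a hypothetical illustration, not code from the paper: the `motor_babble` routine, the switch-and-lamp environment, and all names are invented. It grounds only those percepts that stand in a one-to-one, repeatable relation to an exploratory action, so each retained representation remains empirically falsifiable (re-executing its action can confirm or refute it):

```python
import random

def motor_babble(environment, actions, trials=200, seed=0):
    """Randomly explore `actions` and keep only percepts that couple
    one-to-one with an action, mirroring the bijective percept-action
    coupling described above: each retained representation predicts an
    action whose execution can confirm or falsify it."""
    rng = random.Random(seed)
    observed = {}  # action -> set of percepts seen after that action
    for _ in range(trials):
        action = rng.choice(actions)
        observed.setdefault(action, set()).add(environment(action))
    # Keep actions with a single reliable percept...
    reliable = {a: next(iter(p)) for a, p in observed.items() if len(p) == 1}
    # ...and drop any percept claimed by more than one action (bijectivity).
    claims = {}
    for p in reliable.values():
        claims[p] = claims.get(p, 0) + 1
    return {a: p for a, p in reliable.items() if claims[p] == 1}

# Toy world: two switches each light their own lamp; "flail" is undirected noise.
def toy_env(action, _rng=random.Random(1)):
    if action == "flail":
        return _rng.choice(["lamp_a", "lamp_b"])
    return {"press_a": "lamp_a", "press_b": "lamp_b"}[action]

grounded = motor_babble(toy_env, ["press_a", "press_b", "flail"])
```

In this toy run the random "flail" action fails to yield a stable percept and is discarded, while each switch-press is grounded to its lamp percept, giving a minimal analogue of intentionality emerging from randomized exploratory activity.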
Highlights
Significant deficits have been apparent in traditional approaches to embodied computer vision for some time (Dreyfus, 1972)
Visuo-haptic data arising from these actions will typically be used to further constrain the environment model, either actively or passively (in active learning the agent actions are driven by the imperative of reducing ambiguity in the environment model (Koltchinskii, 2010; Settles, 2010))
Emergent Intentionality in Perception-Action Subsumption
This disparity is manifested in classical problems such as framing (McCarthy and Hayes, 1969) and symbol grounding. (The latter occurs when abstractly manipulated symbolic objects lack an intrinsic connection to the real-world objects that they represent; a chess-playing robot typically requires a prior supervised computer vision problem to be solved in order to apply deduced moves to visually presented chess pieces.)
Summary
Significant deficits have been apparent in traditional approaches to embodied computer vision for some time (Dreyfus, 1972). Visuo-haptic data arising from the agent’s actions will typically be used to further constrain the environment model, either actively or passively (in active learning the agent’s actions are driven by the imperative of reducing ambiguity in the environment model (Koltchinskii, 2010; Settles, 2010)). It is apparent, in this approach, that there exists a very wide disparity between the visual parameterization of the agent’s domain and its action capabilities within it (Nehaniv et al., 2002). Perception-Action (P-A) learning was proposed in order to overcome these issues, adopting as its informal motto, “action precedes perception” (Granlund, 2003; Felsberg et al., 2009). By this it is meant that, in a fully formalizable sense, actions are conceptually prior to perceptions; i.e., perceptual capabilities should depend on action capabilities and not vice versa. It will be the argument of this article that perception-action learning, as well as having this capacity to resolve fundamental epistemic questions about emergent representational capacity, naturally yields a model of emergent intentionality that applies to both human and artificial agents, and may be deployed as an effective design strategy in human–computer interfacing.