Abstract
The relationship between philosophy and research on artificial intelligence (AI) has been difficult since its beginning, with mutual misunderstanding and sometimes even hostility. By contrast, we show how an approach informed by both philosophy and AI can be productive. After reviewing some popular frameworks for computation and learning, we apply the AI methodology of “build it and see” to tackle the philosophical and psychological problem of characterizing perception as distinct from sensation. Our model comprises a network of very simple but interacting agents that have binary experiences of the “yes/no”-type and communicate their experiences with each other. When should such a network be regarded as a single agent rather than a distributed network of entities? We apply machine learning techniques to address the following related questions: i) how can the model explain the stability of compound entities, and ii) how could the model implement a single task such as perceptual inference? We thereby find consistency with previous work on “interface” strategies from perception research. While this reflects some necessary conditions for the ascription of agency, we suggest that it is not sufficient. Here, AI research, if it is intended to contribute to conceptual understanding, would benefit from issues previously raised by philosophy. We thus conclude the article with a discussion of action-selection, the role of embodiment, and consciousness to make this more explicit. We conjecture that a combination of AI research and philosophy allows general principles of mind and being to emerge from a “quasi-empirical” investigation.
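As a purely illustrative aid, the following is a minimal sketch of the kind of model the abstract describes: a network of simple agents with binary “yes/no” experiences that communicate them to their neighbours. The majority-vote update rule, the network size, and the random adjacency are assumptions chosen for concreteness; the abstract does not specify them.

```python
# Minimal sketch (illustrative assumptions, not the paper's model): a network
# of simple agents with binary "yes/no" experiences that communicate their
# states to their neighbours and update by majority vote.
import numpy as np

rng = np.random.default_rng(0)

n_agents = 16
# Random symmetric adjacency: which agents communicate with which (assumed).
adj = rng.integers(0, 2, size=(n_agents, n_agents))
adj = np.triu(adj, 1)
adj = adj + adj.T

def step(states, adj):
    """One communication round: each agent adopts the majority experience
    of the agents it listens to (ties keep the current state)."""
    new_states = states.copy()
    for i in range(len(states)):
        neighbours = np.flatnonzero(adj[i])
        if neighbours.size == 0:
            continue
        yes_votes = states[neighbours].sum()
        if yes_votes * 2 > neighbours.size:
            new_states[i] = 1
        elif yes_votes * 2 < neighbours.size:
            new_states[i] = 0
    return new_states

# Binary "experiences" of the yes/no type, seeded by an initial input.
states = rng.integers(0, 2, size=n_agents)
for _ in range(20):          # iterate until the collective pattern settles
    states = step(states, adj)
print(states)                # asymptotic (stable or cyclic) network state
```

Whether such a collective settles into a stable joint pattern at all is one version of the stability question raised above for compound entities.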
Highlights
The relationship between philosophy and research on artificial intelligence (AI) has been difficult since its beginning, with John Lucas, Hubert Dreyfus, John Searle, and Roger Penrose [1], among many others, arguing that AI is impossible, pointless, or misguided.
The rules that evolved after 50 generations of the GA favor “interface strategies” [53], which in this context means that generically there is no similarity between input states and asymptotic states of the network.
The perceptual states of the network do not mirror any structure in the input other than a probabilistic relationship given by the posterior p(x|i), which informed the fitness payoff (mutual information); see the illustrative sketch after this list.
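The two highlights above describe the experiment at a high level: update rules are evolved with a genetic algorithm (GA) over 50 generations, and each rule set is scored by the mutual information between input states and the asymptotic states the network settles into. The sketch below illustrates that idea only; the rule encoding (a square weight matrix with threshold dynamics), population size, and mutation scheme are illustrative assumptions and not taken from the paper.

```python
# Hedged sketch of the setup in the highlights above. Only "mutual information
# as the fitness payoff" and "50 generations" come from the text; the rule
# encoding, population size, and mutation scheme are illustrative assumptions.
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)

def mutual_information(pairs):
    """Empirical mutual information (bits) between the two items of each pair."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    ps = Counter(s for _, s in pairs)
    return sum(
        (c / n) * np.log2((c / n) / ((px[x] / n) * (ps[s] / n)))
        for (x, s), c in joint.items()
    )

def asymptotic_states(rule, X, n_steps=10):
    """Iterate the (assumed) threshold dynamics on every input column of X
    and return the network states reached after n_steps update rounds."""
    S = (rule @ X > 0).astype(int)
    for _ in range(n_steps):
        S = (rule @ S > 0).astype(int)
    return S

def fitness(rule, X):
    """Payoff: mutual information between inputs and asymptotic network states."""
    S = asymptotic_states(rule, X)
    return mutual_information(list(zip(map(tuple, X.T), map(tuple, S.T))))

n_agents, n_samples, pop_size = 8, 200, 20
X = rng.integers(0, 2, size=(n_agents, n_samples))         # binary input states
population = [rng.normal(size=(n_agents, n_agents)) for _ in range(pop_size)]

for generation in range(50):                                # 50 GA generations
    ranked = sorted(population, key=lambda r: fitness(r, X), reverse=True)
    parents = ranked[: pop_size // 2]                        # truncation selection
    children = [p + 0.1 * rng.normal(size=p.shape) for p in parents]  # mutation
    population = parents + children

best = max(population, key=lambda r: fitness(r, X))
print("best payoff (bits):", round(float(fitness(best, X)), 3))
```

Note that a payoff of this form rewards only the statistical relationship p(x|i) between inputs and asymptotic states; nothing requires the evolved states to resemble the inputs, which is what “interface strategies” means in this context.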
Summary
The relationship between philosophy and research on artificial intelligence (AI) has been difficult since its beginning, with John Lucas, Hubert Dreyfus, John Searle, and Roger Penrose [1], among many others, arguing that AI is impossible, pointless, or misguided. While the questions of philosophy may not be solved by AI, some of them can be translated into a language suitable for exploration using logic, computer modeling, or other methods of AI research. Such questions are generically of the following type: given a particular model, how can one study phenomenon x with the help of such methods? Specifying stable patterns and solving a particular task (e.g. perception) are, we claim, necessary conditions for regarding such collective systems as agents in their own right; this would make the appearance of a single unified entity more cogent. They are not yet sufficient, a point we return to in our closing discussion.