Abstract

This paper shows, using both philosophical and logical arguments, the deficiency of the belief that universal intelligence could be based on the 'physical symbol systems hypothesis' (PSSH) as stated by Newell & Simon. In spite of the (sometimes outstanding) results of orthodox AI, it seems that AI finds itself in a dead end, one resting on the incorrect assumptions of the PSSH. Searle's Chinese room thought experiment best illustrates the problem of symbolic AI: there obviously exist several levels of understanding -- it can be seen as the manipulation of symbols (which in fact have no meaning) or as a subsymbolic process of spreading (nervous) activations. The paper shows why manipulating meaningless symbols is not a sufficient means for intelligent action:

• It is the meaning and the history that one closely combines and associates with symbols and sentences; this cannot be found in orthodox AI systems, because the meaning is the interpretation of the observer (user).
• It is the relation (junction) to the physical world that is responsible for the meaning of a symbol; a symbolic AI system, however, does not have this immediate junction to the real world.
• Thus such a system is 'lifted up' from the real world and will never have direct access to it; i.e. it lacks the 'In-der-Welt-Sein'.
• As an orthodox AI system only uses symbols, it is only capable of processing what can be said in natural language; as M. Polanyi shows, there exists much more knowledge than can be said in words.
• Our language is only the 'surface' of our thinking; i.e. subsymbolic processes are responsible for our utterances.
• Thus symbolic AI tries to formulate and simulate processes that can be observed only from an outer point of view; it models what is merely the last step of our thinking -- linguistic utterances. Much of the subsymbolic information that is so important for our decision making, learning and problem solving is lost.
• When trying to represent knowledge with symbols, a domain must first be formalized; as stated above, very much information is lost in this process.
• Symbolic AI only uses deduction for 'enlarging' or gaining new knowledge. Even learning systems capable of inductive learning only apply deductive rules to realize induction.

The paper shows a way out of this dilemma: neural computing (i.e. connectionism, parallel distributed processing). This new paradigm seems to answer the critiques stated against symbolic AI:

• The 'In-der-Welt-Sein' is grounded in sensory input and effective output.
• This input/output is not symbolic but coded analogously; no symbolic or mathematical code is used -- only the intensity of the stimulus is perceived.
• There is direct interaction with the environment through a non-symbolic code.
• Learning: very powerful algorithms exist for learning and categorizing (inductive learning).
• Independence of domain is obtained by learning on a subsymbolic (meaningless) level; thus there is also the possibility of learning non-symbolic information in a process of self-organisation.

What is needed is a system capable of both symbolic and subsymbolic processing -- a hybrid system which, however, has to be based on a subsymbolic, 'neural' machine.
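The kind of subsymbolic, inductive learning the abstract appeals to can be illustrated with a minimal sketch (not from the paper itself): a single perceptron whose "knowledge" lives only in numeric connection weights, adjusted from raw input intensities, with no symbolic rules or deduction anywhere in the process. The data and parameter names are illustrative assumptions.

```python
# Minimal perceptron sketch: learning from raw "stimulus intensities".
# No symbols are manipulated; the induced regularity exists only as
# numeric weights. Illustrative only -- the paper contains no code.

def train(samples, epochs=20, lr=0.1):
    """samples: list of (inputs, target) pairs with target in {0, 1}."""
    n = len(samples[0][0])
    w = [0.0] * n          # connection weights
    b = 0.0                # bias (resting activation offset)
    for _ in range(epochs):
        for x, t in samples:
            # activation = weighted sum of the input intensities
            a = sum(wi * xi for wi, xi in zip(w, x)) + b
            y = 1 if a > 0 else 0
            err = t - y
            # nudge the weights toward the observed regularity
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# "Bright" vs. "dim" stimuli: the category boundary (total intensity)
# is never stated symbolically; it is induced from the examples.
data = [([0.9, 0.8], 1), ([0.7, 0.9], 1),
        ([0.1, 0.2], 0), ([0.3, 0.1], 0)]
w, b = train(data)
```

After training, the net generalizes to unseen intensities (e.g. `predict(w, b, [0.8, 0.9])` classifies as "bright"), which is the non-deductive, self-organising route to new knowledge the abstract contrasts with symbolic deduction.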


