Abstract

An implemented model of language processing has been developed that views the propositional components of a sentence as neural units. The propositional sentence units are linked through symbolic, reified representations of subordinate sentence parts. Large numbers of these highly standardized propositional units are encoded in a manner that interconnects propositional data through the declarative knowledge base structures, thus minimizing the importance of the procedural component and the need for backward chaining and inference generation. The introduction of new sentence information triggers a connectionist-like flurry of activity in which constantly changing propositional weights and reification strengths effect changes in the belief states encoded within the knowledge base. ©1999 John Wiley & Sons, Inc.
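The mechanism summarized above could be sketched roughly as follows. This is a hypothetical illustration, not the paper's implementation: all class and function names (`PropositionalUnit`, `introduce`) and the relaxation rule are assumptions. It shows propositional units carrying belief weights, linked through reification strengths, with a new unit triggering a connectionist-like round of weight adjustment.

```python
class PropositionalUnit:
    """Hypothetical propositional sentence unit (illustrative, not the paper's code)."""

    def __init__(self, proposition, weight=0.5):
        self.proposition = proposition   # symbolic content of the unit
        self.weight = weight             # belief strength in [0, 1]
        self.links = []                  # (other_unit, reification_strength) pairs

    def link(self, other, strength):
        # Reified, symbolic link between two propositional units
        self.links.append((other, strength))
        other.links.append((self, strength))


def introduce(new_unit, units, rate=0.2, iterations=5):
    """Add a new propositional unit, then relax belief weights across links
    (a stand-in for the 'connectionist-like flurry' of weight changes)."""
    units.append(new_unit)
    for _ in range(iterations):
        for u in units:
            if not u.links:
                continue
            # Pull each unit's weight toward the strength-weighted
            # average of its neighbors' weights.
            total = sum(s for _, s in u.links)
            avg = sum(v.weight * s for v, s in u.links) / total
            u.weight += rate * (avg - u.weight)
    return units
```

For example, introducing a conflicting proposition would lower the weight of a previously strong unit it is linked to, modeling a change in the encoded belief state.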
