Abstract

This work describes how knowledge bases (KBs), in the form of semantic networks, can be represented by a particular weightless neural network model, the GNU (generalising neural unit). It also presents a new strategy for 'spreading' in such networks, that is, for instilling generalising capabilities in the network. The case described here concerns primarily how neural networks can acquire semantic power, and it has implications for knowledge bases that involve some degree of common-sense reasoning. The idea of partitioning the external fields in order to give them structural semantic power proves appropriate for discussing inheritance properties and overlapping concepts in semantic networks. Full versus sparse connectivity is discussed with a view to building applications of low computational cost, and the relationship between sparse connectivity and the ability to perform the required semantic tasks is examined. The results show that retrieval performance, as indicated by three different measurements, is improved by the novel spreading strategy.
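To make the terminology concrete, the sketch below illustrates a weightless (RAM-based) neural unit in the WISARD style, with 'spreading' shown as Hamming-distance generalisation over stored input tuples. This is a minimal illustrative assumption, not the paper's GNU architecture or its novel spreading strategy; all class and function names here are hypothetical.

```python
def hamming(a, b):
    """Number of differing bits between two equal-length binary tuples."""
    return sum(x != y for x, y in zip(a, b))


class RamNeuron:
    """One RAM node: remembers the input tuples (addresses) seen in training."""

    def __init__(self):
        self.memory = set()

    def train(self, address):
        self.memory.add(address)

    def respond(self, address, spread=0):
        # 'Spreading' illustrated as Hamming-distance generalisation:
        # fire if the address lies within `spread` bits of any stored
        # pattern (an assumption for illustration only).
        return any(hamming(address, m) <= spread for m in self.memory)


class Discriminator:
    """Partitions a binary input into fixed-size tuples, one RAM per tuple."""

    def __init__(self, input_size, tuple_size):
        self.tuple_size = tuple_size
        self.neurons = [RamNeuron() for _ in range(input_size // tuple_size)]

    def _tuples(self, pattern):
        n = self.tuple_size
        return [tuple(pattern[i * n:(i + 1) * n])
                for i in range(len(self.neurons))]

    def train(self, pattern):
        for neuron, addr in zip(self.neurons, self._tuples(pattern)):
            neuron.train(addr)

    def score(self, pattern, spread=0):
        # Fraction of RAMs that fire: the retrieval strength.
        hits = sum(n.respond(a, spread)
                   for n, a in zip(self.neurons, self._tuples(pattern)))
        return hits / len(self.neurons)


d = Discriminator(input_size=8, tuple_size=2)
d.train((1, 0, 1, 1, 0, 0, 1, 0))
exact = d.score((1, 0, 1, 1, 0, 0, 1, 0))             # 1.0: exact recall
noisy = d.score((1, 1, 1, 1, 0, 0, 1, 0))             # 0.75: one RAM misses
spread = d.score((1, 1, 1, 1, 0, 0, 1, 0), spread=1)  # 1.0: spreading recovers it
```

The last three calls show the effect being measured: without spreading, a one-bit perturbation lowers the retrieval score; with a spread of one bit, the corrupted tuple still addresses a stored neighbour and full recall is recovered.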
