Abstract

The use of semantic technologies in system development is increasing. Knowledge represented in ontologies can serve many applications, ranging from knowledge-based recommendation systems to Semantic Web applications. The core component of these semantic applications is the logical inference engine, which processes production rules and derives new facts into the knowledge base. The inference engine's performance is directly related to the size of the knowledge base, and the demands of today's knowledge bases have become a challenge. This paper presents a knowledge-base search algorithm for the RETE algorithm that exploits the parallel structures intrinsic to modern computers, increasing the performance of the inference engine. We implemented a thread-based and a GPU-based search engine and compared their performance. The main contributions of this paper are the parallel system that implements the search engine and the algorithm for vectorization of the knowledge base.
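The abstract does not give the paper's actual data layout or kernel code, but the idea of a parallel scan over a vectorized knowledge base can be sketched as follows. This is a minimal, hypothetical illustration: facts are flat triples, a rule condition uses `None` as a wildcard, and chunks of the fact list are matched concurrently by threads (all names and the matching scheme are assumptions, not the paper's implementation).

```python
# Hypothetical sketch of a thread-based parallel search over a
# "vectorized" knowledge base: facts stored as flat tuples, with
# each worker scanning one slice for matches against a condition.
from concurrent.futures import ThreadPoolExecutor

def matches(fact, condition):
    """A fact matches if every bound slot of the condition agrees;
    None in the condition acts as a wildcard (free variable)."""
    return all(c is None or c == f for f, c in zip(fact, condition))

def parallel_search(facts, condition, workers=4):
    """Split the fact list into chunks and scan them concurrently."""
    chunk = max(1, len(facts) // workers)
    slices = [facts[i:i + chunk] for i in range(0, len(facts), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(
            lambda s: [f for f in s if matches(f, condition)], slices)
    return [f for part in parts for f in part]

kb = [("cat", "is-a", "animal"),
      ("dog", "is-a", "animal"),
      ("dog", "chases", "cat")]
print(parallel_search(kb, (None, "is-a", "animal")))
# → [('cat', 'is-a', 'animal'), ('dog', 'is-a', 'animal')]
```

A GPU version would replace the per-chunk Python scan with a data-parallel kernel over the same flat fact array, which is exactly what the vectorized layout enables.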
