Abstract

In the context of the Semantic Web, many ontology-related operations can be boiled down to one fundamental task: finding as accurately as possible the semantics hiding beneath the superficial representation of ontological entities. This, however, is not an easy task, due to the ambiguous nature of semantics and the lack of a systematic engineering method to guide how we comprehend semantics. We acknowledge the gap between human cognition and knowledge representation formalisms: even though precise logic formulae can be used as the canonical representation of ontological entities, understanding of such formulae may vary. A feasible solution, therefore, is to approximate semantics in a way that reflects such cognitive variations. In this paper, we propose an approximation of semantics using sets of words/phrases, referred to as WɪKɪmantic vectors. These vectors emerge through a set of well-tuned methods that gradually surface semantics that would otherwise remain implicit. Given a concept, we first identify its conceptual niche amongst its neighbours in the graph representation of the ontology. We then generate a natural language paraphrase of the isolated sub-graph and project this textual description onto a large document repository. WɪKɪmantic vectors are then drawn from the document repository. We evaluated each of the aforementioned steps by way of user studies.
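To make the pipeline sketched above concrete, the following is a minimal, illustrative Python sketch of the three stages described in the abstract: isolating a concept's graph neighbourhood, verbalising it, and drawing a word-set vector from a document repository. The toy ontology, the placeholder repository, the word-overlap retrieval, and the frequency-based word selection are all assumptions made for illustration; they do not reproduce the paper's actual methods or tuning.

```python
from collections import Counter

# Toy ontology: concept -> (property, neighbour) pairs, standing in for the
# RDF graph neighbourhood isolated around a concept (illustrative only).
ONTOLOGY = {
    "Espresso": [("subClassOf", "Coffee"), ("brewedWith", "EspressoMachine")],
    "Coffee":   [("subClassOf", "Beverage"), ("madeFrom", "CoffeeBean")],
}

# Toy document repository: id -> text. In the paper this is a large corpus;
# these short articles are placeholders.
REPOSITORY = {
    "doc1": "Espresso is a concentrated coffee beverage brewed by forcing hot water through ground coffee",
    "doc2": "An espresso machine brews coffee under pressure",
    "doc3": "Tea is an aromatic beverage prepared from cured leaves",
}

STOPWORDS = {"is", "a", "an", "the", "by", "of", "from", "under", "through"}


def verbalise(concept: str) -> str:
    """Turn the concept's graph neighbourhood into a crude natural-language paraphrase."""
    clauses = [f"{concept} {prop} {neighbour}" for prop, neighbour in ONTOLOGY.get(concept, [])]
    return ". ".join(clauses)


def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Rank repository documents by simple word overlap with the paraphrase."""
    query_words = {w.lower() for w in query.replace(".", " ").split()}
    scored = sorted(
        REPOSITORY.items(),
        key=lambda item: len(query_words & set(item[1].lower().split())),
        reverse=True,
    )
    return [doc_id for doc_id, _ in scored[:top_k]]


def semantic_vector(concept: str, size: int = 8) -> list[str]:
    """Draw the most frequent content words from the retrieved documents."""
    counts = Counter()
    for doc_id in retrieve(verbalise(concept)):
        for word in REPOSITORY[doc_id].lower().split():
            if word not in STOPWORDS:
                counts[word] += 1
    return [word for word, _ in counts.most_common(size)]


if __name__ == "__main__":
    # Example: a word-set approximation of the semantics of "Espresso".
    print(semantic_vector("Espresso"))
```

Running the sketch prints a small set of content words (e.g. "coffee", "espresso", "brewed") that loosely characterise the concept; the paper's vectors are obtained analogously but over a much larger repository and with more carefully tuned extraction steps.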
