Abstract

In the era of data analytics, unstructured text remains the dominant data format. The vector space model is commonly used to represent and model text semantics, but it has notable limitations. Its main alternative is the graph model from graph theory, which raises the question: on what basis should text semantics be modeled as a graph? Cognitive semantics addresses this question using semantic graphs, as it models the underlying mechanisms by which human cognition learns, represents, and expands semantics. Since textual data is produced as natural language by human cognitive skills, a reverse-engineering methodology is a promising way to extract semantics back from text. In this paper, we present a systematic perspective on the main computational graph-based cognitive-semantic models of human memory that have been used for the semantic processing of unstructured text. The applications, strengths, and limitations of each model are described. Finally, open problems, future work, and conclusions are presented.
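
As a minimal illustration of the contrast drawn above (not a model from the survey), the following Python sketch represents one sentence both ways: as a bag-of-words vector, where relational structure is lost, and as a word co-occurrence graph, where local structure is kept. The whitespace tokenizer and the word-adjacency edges are simplifying assumptions chosen for brevity.

# Sketch: vector space model vs. graph model of one sentence.
# Assumptions: whitespace tokenization, adjacent-word edges.
from collections import Counter

import networkx as nx

sentence = "graphs model semantics better than sparse vectors model semantics"
tokens = sentence.split()

# Vector space model: only term counts survive; word order and
# relations between words are discarded.
vsm = Counter(tokens)
print(dict(vsm))

# Graph model: each pair of adjacent words becomes an edge, so the
# representation preserves which terms occur together.
g = nx.Graph()
g.add_edges_from(zip(tokens, tokens[1:]))
print(sorted(g.edges()))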
