Abstract
In this era of data analytics, unstructured text remains the dominant data format. The vector space model is commonly used to represent and model text semantics; however, it has several limitations. The main alternative to the vector space model is the graph model from graph theory. The question then becomes: on what basis should text semantics be modeled as graphs? Cognitive semantics tries to answer this question using semantic graphs, as it models the underlying mechanisms by which human cognition learns, represents, and expands semantics. Because textual data is produced as natural language by human cognitive skills, a reverse-engineering methodology is a promising way to extract semantics back from text. In this paper, we present a systematic perspective on the main computational graph-based cognitive-semantic models of human memory that have been used for the semantic processing of unstructured text. The applications, strengths, and limitations of each model are described. Finally, open problems, future work, and conclusions are presented.