Abstract

Natural language processing (NLP) has its roots in artificial intelligence and computational linguistics. The proliferation of large-scale web corpora and social media data, together with advances in machine learning and deep learning, has led to practical applications in diverse NLP areas such as machine translation, information extraction, named entity recognition (NER), text summarization, and sentiment analysis. NER is a subtask of information extraction that seeks to discover and categorize specific entities, such as nouns or relations, in unstructured text. In this paper, we present a review of the foundations of three tolerance-based granular computing methods (rough sets, fuzzy-rough sets, and near sets) for representing structured (documents) and unstructured (linguistic entities) text. Applications of these methods are presented via semi-supervised and supervised learning algorithms for labelling relational facts from web corpora and for sentiment classification (non-topic-based text). The performance of the three presented algorithms is discussed in terms of benchmark datasets and algorithms. We make the case that tolerance relations provide an ideal framework for studying the concept of similarity for text-based applications. The aim of our work is to demonstrate that approximation structures viewed through the prism of tolerance have a great deal of fluidity and integrate conceptual structures at different levels of granularity, thereby facilitating learning in the presented NLP applications.
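As a minimal illustration of the tolerance idea (not taken from the paper): a tolerance relation is reflexive and symmetric but, unlike an equivalence relation, not necessarily transitive. The sketch below assumes cosine similarity over toy term-frequency vectors and a hypothetical threshold `eps`; both choices are for illustration only.

```python
import math

def cosine_sim(u, v):
    # Cosine similarity between two equal-length feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def tolerant(u, v, eps=0.7):
    # Tolerance relation: x ~ y iff similarity(x, y) >= eps.
    # Reflexive and symmetric, but NOT necessarily transitive.
    return cosine_sim(u, v) >= eps

# Toy term-frequency vectors: a ~ b and b ~ c hold, yet a ~ c fails,
# so the relation is not transitive.
a = [1.0, 0.0]
b = [1.0, 1.0]
c = [0.0, 1.0]
print(tolerant(a, b), tolerant(b, c), tolerant(a, c))
```

The failure of transitivity is exactly what lets tolerance classes overlap, giving the "fluid" approximation structures the abstract refers to.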
