Abstract

In recent years, to systematize the enormous amount of information on the Internet, ontologies, which organize knowledge through a hierarchical structure of concepts, have received a great deal of attention in spatiotemporal information science. However, constructing an ontology manually requires a large amount of time and deep knowledge of the target field. Consequently, automating ontology generation from a raw text corpus is required to meet the demand for ontologies. As an initial attempt at ontology generation with a neural network, a recurrent neural network (RNN)-based method was proposed. However, developments in natural language processing (NLP) have made it possible to update this architecture. In particular, transfer learning of language models trained on large unlabeled corpora, such as bidirectional encoder representations from transformers (BERT), has yielded breakthroughs in NLP. Inspired by these achievements, we propose a novel workflow for ontology generation consisting of two-stage learning, which applies transfer learning of language models. This paper provides a quantitative comparison between the proposed method and existing methods. Our results show that our best method improved accuracy by over 12.5%.

Keywords: Ontology, Automation, Natural Language Processing (NLP), Pretrained model
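To make the idea of applying transfer learning of a pretrained language model to ontology construction concrete, the following is a minimal sketch, not the authors' actual two-stage workflow: it fine-tunes BERT to classify whether one concept is the hypernym (parent) of another, which is the kind of pairwise decision a concept hierarchy can be built from. The model name, concept pairs, and label scheme are illustrative assumptions.

```python
# Hedged sketch: fine-tune a pretrained BERT model on hypothetical
# (child concept, candidate parent concept) pairs. Label 1 means the
# second concept is a valid hypernym of the first; 0 means it is not.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Illustrative concept pairs, not data from the paper.
pairs = [("river", "body of water"), ("river", "vehicle")]
labels = torch.tensor([1, 0])

# Encode each pair as a BERT sentence pair: [CLS] child [SEP] parent [SEP].
enc = tokenizer(
    [child for child, _ in pairs],
    [parent for _, parent in pairs],
    padding=True,
    truncation=True,
    return_tensors="pt",
)

# One gradient step of fine-tuning on the pair labels.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
out = model(**enc, labels=labels)  # cross-entropy loss over the two classes
out.loss.backward()
optimizer.step()
```

Running the trained classifier over candidate concept pairs extracted from a corpus would yield the parent-child edges from which a hierarchy can be assembled; how the paper's two stages divide this work is detailed in the full text.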
