Abstract
This paper presents our contribution to the Large Language Models for Ontology Learning (LLMs4OL) challenge hosted at the ISWC conference. The challenge involves extracting and classifying various ontological components from multiple datasets. The challenge organizers provided a training set and a test set. Our goal is to determine under which conditions foundation models such as BERT can be used for ontology learning. To this end, we conducted a series of experiments on several datasets. Initially, GPT-4 was tested on the WordNet dataset, achieving an F1-score of 0.9264. Subsequently, we performed additional experiments on the same dataset using BERT. These experiments demonstrated that combining BERT with rule-based methods yields an F1-score of 0.9938, surpassing GPT-4 and securing first place for term typing on the WordNet dataset.