Abstract

Competency Questions (CQs) are essential in ontology engineering; they express an ontology's functional requirements as natural language questions, offer crucial insights into an ontology's scope, and are pivotal for various tasks, e.g. ontology reuse, testing, requirement specification, and pattern definition. Despite their importance, the practice of publishing CQs alongside ontological artefacts is not commonly adopted. We propose an approach based on Generative AI, specifically Large Language Models (LLMs), for retrofitting CQs from existing ontologies, and we investigate how open LLMs (i.e. Llama-2-70b, Mistral 7B, and Flan-T5-xl) perform in generating CQs for existing ontologies. We compare these results with our previous efforts using closed-source LLMs and reflect on the findings.
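
To make the idea of retrofitting CQs concrete, the following is a minimal, hypothetical sketch (not the authors' pipeline): it pulls human-readable labels from an ontology file and prompts one of the open models mentioned in the abstract (Flan-T5-xl) to propose competency questions. The file name "ontology.ttl", the prompt wording, and the generation settings are illustrative assumptions only.

```python
# Illustrative sketch only: prompt an open LLM to propose CQs for an ontology.
# Assumes rdflib and transformers are installed and "ontology.ttl" exists locally.
from rdflib import Graph, RDFS
from transformers import pipeline

# Extract human-readable labels from the ontology to ground the prompt.
g = Graph()
g.parse("ontology.ttl", format="turtle")
labels = sorted({str(o) for _, _, o in g.triples((None, RDFS.label, None))})

# Prompt wording is an assumption, not the one used in the paper.
prompt = (
    "The following terms come from an ontology: "
    + ", ".join(labels[:40])
    + ". Write five competency questions that this ontology should be able to answer."
)

# Flan-T5-xl is one of the open models evaluated; any instruction-tuned LLM could be swapped in.
generator = pipeline("text2text-generation", model="google/flan-t5-xl")
print(generator(prompt, max_new_tokens=256)[0]["generated_text"])
```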

