Abstract

The development and continued scaling of the transformer deep-learning architecture have had an enormous impact across many domains, not least natural language processing. The power of these models has renewed interest in commonsense knowledge, an area that transformer-based language models are well placed to support. Most recent research has focused on probing the commonsense already encoded in these models' pre-trained parameters and on filling remaining gaps using knowledge graphs and fine-tuning. We build on the demonstrated linguistic competence of very large transformer-based language models to extend a limited commonsense knowledge graph that was originally generated solely from visual data. Compared with language models fine-tuned on a large seed corpus, few-shot-prompted pre-trained models can capture the context of an initial knowledge graph with less bias, and we show that they can also contribute novel concepts to the visual knowledge graph. To the best of our knowledge, this is a new approach to commonsense knowledge generation, and it can reduce cost by up to a factor of five compared with the current state of the art. A further contribution is the assignment of fuzzy linguistic labels to the generated triples. The procedure is end-to-end, with knowledge graphs as its framing: triples are expressed in natural language, analyzed, and then added back to the commonsense knowledge graph as triples.
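As a rough illustration of this round trip (a sketch, not the authors' exact pipeline), the following Python fragment verbalizes seed triples, queries a hypothetical few-shot-prompted language model through an assumed `complete(prompt)` callable, and parses the completion back into a triple before adding it to the graph. The prompt format, function names, and parsing heuristic are all assumptions made for illustration.

```python
# Illustrative sketch only: a hypothetical few-shot round trip from
# knowledge-graph triples to natural language and back. `complete`
# stands in for any large pre-trained language model API.

def verbalize(triple):
    """Express a (subject, relation, object) triple as a sentence."""
    subj, rel, obj = triple
    return f"{subj} {rel} {obj}."

def build_prompt(seed_triples, query_subject, query_relation):
    """Few-shot prompt: verbalized seed triples, then an open-ended query."""
    examples = "\n".join(verbalize(t) for t in seed_triples)
    return f"{examples}\n{query_subject} {query_relation}"

def parse_completion(query_subject, query_relation, completion):
    """Parse the model's continuation back into a triple (naive heuristic)."""
    obj = completion.strip().split(".")[0].strip()
    return (query_subject, query_relation, obj) if obj else None

def expand_graph(graph, complete, query_subject, query_relation):
    """Add a new triple to the graph using a few-shot-prompted model."""
    prompt = build_prompt(sorted(graph), query_subject, query_relation)
    completion = complete(prompt)  # assumed language-model call
    triple = parse_completion(query_subject, query_relation, completion)
    if triple and triple not in graph:
        graph.add(triple)
    return triple

# Example usage with a stub standing in for a real language model:
if __name__ == "__main__":
    seed = {("a dog", "can be found in", "a park"),
            ("a cup", "is used for", "drinking")}
    stub_lm = lambda prompt: "eating soup."
    print(expand_graph(seed, stub_lm, "a spoon", "is used for"))
```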
