Abstract
This article proposes applying retrieval-augmented generation (RAG) to support low-code developers by augmenting large language models (LLMs) with up-to-date, domain-specific knowledge. Because low-code development requires combining multiple systems into a final product, developers must consult several documentation sources along with articles, videos, and forum threads. This process can be time-consuming, which motivates turning to an LLM for an authoritative answer. However, LLMs often lack knowledge of low-code platforms, leading to hallucinations and superficial responses. RAG grounds an LLM's output in retrieved, relevant information, suggesting that it may be effectively applied in low-code development. In the proposed approach, heterogeneous data sources concerning low-code systems are converted to a text representation, split into logical chunks, and stored in a vector database. At inference time, cosine similarity is used to retrieve the top-K documents, which are concatenated with the user query, and the resulting text is used as a prompt to the LLM. The results support the hypothesis that RAG models outperform standard LLMs at knowledge retrieval in this domain.
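The retrieval step described above can be sketched in a few lines. This is a minimal illustration, not the article's implementation: the `embed` function here is a placeholder bag-of-characters vectorizer standing in for a real embedding model, and the example chunks are hypothetical.

```python
import math

def embed(text):
    # Placeholder embedding: bag-of-characters counts over a-z.
    # A real system would use a learned embedding model here.
    vec = [0.0] * 26
    for ch in text.lower():
        if 'a' <= ch <= 'z':
            vec[ord(ch) - ord('a')] += 1.0
    return vec

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_top_k(query, chunks, k=2):
    # Rank stored chunks by cosine similarity to the query embedding
    # and return the top-K, as in the pipeline described above.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

def build_prompt(query, chunks, k=2):
    # Concatenate the retrieved chunks with the user query to form
    # the final prompt passed to the LLM.
    context = "\n\n".join(retrieve_top_k(query, chunks, k))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
```

In practice the chunk embeddings would be precomputed and stored in the vector database, so only the query is embedded at request time.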