This study explores the role of Large Language Models (LLMs) in information retrieval within the digital environment through a theoretical analysis of their concepts and operational mechanisms, a comparison with traditional methods, and an identification of key challenges and contemporary applications. The findings reveal that LLMs represent a qualitative shift in natural language processing owing to their ability to understand context and generate precise responses. The study highlights their strength in enhancing retrieval systems through integration with techniques such as Retrieval-Augmented Generation (RAG) and Knowledge Graphs (KGs), thereby improving the reliability and effectiveness of results, especially in specialized domains. Despite these capabilities, the study identifies technical and methodological challenges, including hallucination and limited interpretability. It emphasizes that LLMs do not replace traditional retrieval methods but complement them, depending on the nature of the task and on user behavior. The study recommends developing hybrid models, enhancing multimodal capabilities, and expanding real-world evaluations, particularly in low-resource languages and specialized fields. It concludes that integrating LLMs with structured knowledge representations offers a promising path toward building more accurate, equitable, and intelligent information retrieval systems.
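The RAG pattern mentioned in the abstract can be illustrated with a minimal sketch: retrieve the passages most similar to a query, then ground the LLM prompt in them. This is not the paper's implementation; the toy bag-of-words similarity below is a hypothetical stand-in for a real embedding model, chosen only to keep the example self-contained.

```python
from collections import Counter
import math

def embed(text):
    """Toy bag-of-words 'embedding' -- a stand-in for a real encoder."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    """Rank corpus passages by similarity to the query (the 'R' in RAG)."""
    q = embed(query)
    return sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, corpus, k=2):
    """Ground the LLM prompt in retrieved passages (the 'G' in RAG)."""
    context = "\n".join(retrieve(query, corpus, k))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
```

Because the generated answer is conditioned on retrieved evidence rather than on the model's parameters alone, this structure addresses the hallucination problem the abstract raises.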