Abstract

Recently, large language models (LLMs) such as ChatGPT and Llama have gained significant attention. These models demonstrate a remarkable capability to solve complex tasks, drawing their knowledge primarily from general-purpose training corpora rather than niche subject areas. Consequently, there has been a growing demand for domain-specific LLMs tailored to the social and natural sciences, such as BloombergGPT and BioGPT. In this study, we present a domain-specific LLM focused on real estate: Real-GPT. The model is obtained by applying the parameter-efficient fine-tuning technique quantized low-rank adaptation (QLoRA) to Mistral 7B. To create a comprehensive fine-tuning dataset, we compiled a curated self-instruction dataset of 21,000 examples sourced from 670 scientific papers, market research reports, scholarly articles, and real estate books. To assess the efficacy of Real-GPT, we devised a set of 1,046 multiple-choice questions to gauge the real estate knowledge of the models. Furthermore, we tested Real-GPT in a real investment scenario. Despite its compact size, our model significantly outperforms GPT-3.5 and Mistral 7B. The model not only achieves superior benchmark performance but also demonstrates its capacity to support investment decisions and interpret current market data. These results highlight the potential of LLMs to transform real estate analysis and decision-making.

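As a rough illustration of the fine-tuning setup named in the abstract, the sketch below shows how QLoRA can be applied to Mistral 7B using the Hugging Face transformers, peft, and bitsandbytes libraries. The model identifier, adapter rank, target modules, and other hyperparameters are illustrative assumptions, not the exact configuration used for Real-GPT.

```python
# Minimal QLoRA sketch: 4-bit quantized base model plus trainable low-rank adapters.
# Hyperparameters below are placeholders, not the Real-GPT training configuration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "mistralai/Mistral-7B-v0.1"  # assumed base checkpoint

# Load the base model in 4-bit NF4, the quantization scheme used by QLoRA.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Attach low-rank adapters to the attention projections; only these small
# adapter matrices are updated during fine-tuning, the 4-bit weights stay frozen.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports the small fraction of trainable weights
```

The prepared model can then be passed to a standard supervised fine-tuning loop (e.g., a Trainer over the instruction dataset); only the adapter weights are stored, keeping the resulting checkpoint small relative to the 7B base model.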