Large language models (LLMs) such as ChatGPT and Llama have recently gained significant attention. These models demonstrate a remarkable capability to solve complex tasks, but they draw primarily on broad, general-purpose training corpora rather than specialized subject areas. Consequently, there is growing demand for domain-specific LLMs tailored to the social and natural sciences, such as BloombergGPT or BioGPT. In this study, we present a domain-specific LLM focused on real estate: Real-GPT. The model applies the parameter-efficient fine-tuning technique known as quantized low-rank adaptation (QLoRA) to Mistral 7B. To create a comprehensive fine-tuning corpus, we compiled a curated dataset of 21,000 self-instruction samples sourced from 670 scientific papers, market research reports, scholarly articles, and real estate books. To assess the efficacy of Real-GPT, we devised a set of 1,046 multiple-choice questions gauging the models' real estate knowledge, and we further tested Real-GPT in a real investment scenario. Despite its compact size, our model significantly outperforms GPT-3.5 and Mistral 7B. Beyond this superior benchmark performance, Real-GPT demonstrates the capacity to support investment decisions and interpret current market data, illustrating the potential of LLMs to transform real estate analysis and decision-making.
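The parameter savings behind the low-rank adaptation step mentioned above can be illustrated with a minimal, stdlib-only sketch. Note that the matrix dimensions below are illustrative toy values, not Mistral 7B's actual layer shapes, and the function name is our own:

```python
# Sketch of the LoRA idea: instead of updating a full d_out x d_in weight
# matrix W during fine-tuning, train two small matrices B (d_out x r) and
# A (r x d_in) with rank r << min(d_out, d_in), and use W + (alpha / r) * (B @ A).
# QLoRA additionally keeps the frozen base weights in 4-bit precision.

def lora_param_counts(d_out: int, d_in: int, r: int) -> tuple[int, int]:
    """Return (full fine-tuning params, LoRA adapter params) for one weight matrix."""
    full = d_out * d_in          # every entry of W is trainable
    lora = d_out * r + r * d_in  # only the two low-rank factors are trainable
    return full, lora

full, lora = lora_param_counts(d_out=4096, d_in=4096, r=8)
print(full, lora, lora / full)  # -> 16777216 65536 0.00390625
```

With rank 8, the adapter trains well under 1% of the weights of a single 4096 x 4096 layer, which is what makes fine-tuning a 7B-parameter model feasible on modest hardware.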