Large language models have demonstrated impressive language processing capabilities in recent years and excel across natural language processing tasks. However, their generated text sometimes contains hallucinations, that is, content that contradicts real-world knowledge, the context, or the user input. This problem stems mainly from inherent limitations in areas such as data quality, the model training process, and the generation process. Hallucinations have drawn sustained attention from the academic community, and it is widely recognized that their potential consequences should not be underestimated. This paper systematically summarizes research on the causes of hallucinations in large language models and introduces mainstream classification schemes as well as current measures for addressing the issue. Specifically, it divides the causes of hallucinations into two categories: (1) hallucinations arising from the training process and (2) hallucinations arising from the generation process, identifying four typical causes for the former and five for the latter. It also discusses in detail 16 methods for mitigating hallucinations that arise during generation. Finally, the paper examines inherent flaws that may exist in large language models, aiming to support a more comprehensive understanding of, and further research into, hallucinations and large language models. Overall, this paper provides a detailed account of hallucinations in large language models and, drawing on prior research, argues that models based on autoregressive token prediction are unlikely to avoid hallucinations completely.