Federated Learning (FL) is a distributed alternative to traditional machine learning frameworks in which a central aggregation server computes a global model from the parameters of locally trained models, thereby mitigating the privacy leakage that arises when sensitive raw data are collected from local devices. However, classic FL methods with synchronous aggregation strategies often suffer from poor resource utilization, because each training round must wait for the slowest devices (stragglers) before aggregation can proceed. In addition, global model accuracy can degrade under the uneven (non-IID) data distributions and unreliable devices found in real-world scenarios. To address these limitations, numerous Asynchronous Federated Learning (AFL) methods have been developed to improve communication efficiency, model performance, privacy, and security. This article surveys existing research on AFL and its applications. It first introduces the concept and development of FL, then discusses in detail the related work and main research directions of AFL, including handling stragglers and model staleness, improving inter-device communication efficiency, and addressing privacy and scalability issues. It then explores applications of AFL in various domains, particularly mobile edge computing, Internet of Things (IoT) devices, and medical data analysis. Finally, the article offers an outlook on future research directions, arguing for the design of efficient asynchronous optimization algorithms, the reduction of communication overhead and computing resource usage, and the exploration of new data privacy protection methods.