Federated learning (FL) has been widely adopted to train machine learning models over massive data in edge computing. However, FL in edge computing faces critical challenges, e.g., data imbalance, edge dynamics, and resource constraints. Existing FL solutions cannot cope well with data imbalance or edge dynamics, and may incur high resource costs. In this paper, we propose an adaptive asynchronous federated learning (AAFL) mechanism. To deal with edge dynamics, a fraction <inline-formula><tex-math notation="LaTeX">$\alpha$</tex-math></inline-formula> of all local updates is aggregated, in order of arrival at the parameter server, in each epoch. Moreover, the system can adaptively vary the number of locally updated models used for global model aggregation across epochs according to network conditions. We then propose experience-driven algorithms based on deep reinforcement learning (DRL) to adaptively determine the optimal value of <inline-formula><tex-math notation="LaTeX">$\alpha$</tex-math></inline-formula> in each epoch for two cases of AAFL, a single learning task and multiple learning tasks, so as to reduce training completion time under resource constraints. Extensive experiments on classical models and datasets demonstrate the effectiveness of the proposed algorithms. Specifically, compared with state-of-the-art solutions, AAFL can reduce completion time by about 70 percent and improve learning accuracy by about 28 percent under resource constraints.
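The <inline-formula><tex-math notation="LaTeX">$\alpha$</tex-math></inline-formula>-fraction aggregation rule described above can be sketched as follows. This is a minimal illustration only: the function name, the tuple representation of arriving updates, and the simple element-wise averaging are assumptions for exposition, not the paper's actual aggregation scheme.

```python
import math

def aafl_aggregate(local_updates, alpha):
    """Aggregate the earliest-arriving fraction alpha of local updates.

    local_updates: list of (arrival_time, update_vector) tuples, one per
    edge device. Only the first ceil(alpha * K) updates to arrive at the
    parameter server are averaged, so stragglers do not stall the epoch.
    (Hypothetical sketch; the paper may weight updates differently.)
    """
    k = max(1, math.ceil(alpha * len(local_updates)))
    # Select the k updates with the smallest arrival times.
    earliest = sorted(local_updates, key=lambda t: t[0])[:k]
    dim = len(earliest[0][1])
    # Element-wise average of the selected update vectors.
    return [sum(u[i] for _, u in earliest) / k for i in range(dim)]

# Example: three devices report updates; with alpha = 0.5 the server
# waits for only the two fastest and ignores the slow third device.
updates = [(0.1, [1.0, 2.0]), (0.2, [3.0, 4.0]), (0.9, [100.0, 100.0])]
global_update = aafl_aggregate(updates, alpha=0.5)  # averages first two
```

A smaller <inline-formula><tex-math notation="LaTeX">$\alpha$</tex-math></inline-formula> shortens each epoch (fewer devices to wait for) but discards more local work, which is the trade-off the DRL agent navigates.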