Reinforcement learning (RL) has achieved notable success in robotics. A major challenge in applying RL in the real world, however, is that unexpected disturbances in the local dynamics can cause policies to fail at test time. This calls for models that can adapt to environments with varying dynamics. In this paper, we propose a Dynamic-Aware reinforcement learning model with graph-based rapid adaptation (DAGA) to address these challenges. DAGA encodes dynamics features from a few interactions and conditions the policy on an environment embedding. To encourage the embedding to capture variations in dynamics, we present an objective function based on forward prediction and environment similarity. The proposed model enables a robot to generalize across a wide range of transition dynamics resulting from different hardware parameters. Experiments on robot locomotion and manipulation tasks show that DAGA outperforms existing baselines in both sample efficiency and generalization, suggesting its potential for deploying RL policies in realistic, changing environments.
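To make the two-part objective concrete, the sketch below shows one plausible reading of it: a forward-prediction loss that forces the environment embedding to carry dynamics information, plus a contrastive environment-similarity term that pulls embeddings from the same environment together and pushes different environments apart. This is a minimal illustration, not the authors' implementation; the paper's graph-based encoder is not specified here, so a mean-pooled MLP stands in for it, and all module names, shapes, and hyperparameters are assumptions.

```python
# Hypothetical sketch of a DAGA-style embedding objective (assumed names/design).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextEncoder(nn.Module):
    """Encodes a short window of (s, a, s') transitions into an environment
    embedding z. (Stand-in for the paper's unspecified graph-based encoder.)"""
    def __init__(self, obs_dim, act_dim, z_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * obs_dim + act_dim, 128), nn.ReLU(),
            nn.Linear(128, z_dim),
        )

    def forward(self, s, a, s_next):           # each tensor: (B, K, dim)
        x = torch.cat([s, a, s_next], dim=-1)
        return self.net(x).mean(dim=1)          # pool over the K transitions

class ForwardModel(nn.Module):
    """Predicts s' from (s, a, z); training it makes z dynamics-informative."""
    def __init__(self, obs_dim, act_dim, z_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim + z_dim, 128), nn.ReLU(),
            nn.Linear(128, obs_dim),
        )

    def forward(self, s, a, z):
        return self.net(torch.cat([s, a, z], dim=-1))

def embedding_loss(encoder, fwd, batch, env_ids, margin=1.0):
    """Forward-prediction loss + contrastive environment-similarity loss.
    batch holds a context window per sample plus one held-out transition."""
    z = encoder(batch["ctx_s"], batch["ctx_a"], batch["ctx_s_next"])
    pred_loss = F.mse_loss(fwd(batch["s"], batch["a"], z), batch["s_next"])
    # Same-environment pairs should have close embeddings; different
    # environments should be separated by at least `margin`.
    dist = torch.cdist(z, z)                             # (B, B) pairwise distances
    same = (env_ids[:, None] == env_ids[None, :]).float()
    sim_loss = (same * dist.pow(2)
                + (1 - same) * F.relu(margin - dist).pow(2)).mean()
    return pred_loss + sim_loss
```

Under this reading, the policy would be conditioned on z at rollout time, so adapting to a new environment only requires re-encoding a few fresh interactions rather than retraining.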