High-dimensional inputs limit the sample efficiency of deep reinforcement learning, making it difficult to apply to real-world continuous control tasks, especially in uncertain environments. A good state embedding is crucial to an agent's performance on downstream tasks. The bisimulation metric is an effective representation learning approach that abstracts task-relevant, invariant latent embeddings of states based on behavioral similarity. However, because it considers only single-step transitions, the features it captures reflect what we call short-term dynamics. We argue that long-term dynamics are also important for state representation learning. In this paper, we present Invariant Representations Learning with Future Dynamics (RLF), which uses graph neural networks to learn long-term dynamics and trains the representation network with a new state metric inspired by the bisimulation relation. We evaluate our method on continuous control tasks from the DeepMind Control Suite and show that RLF learns more stable embeddings than state-of-the-art representation learning methods for both state and pixel inputs. Policies learned on top of these embeddings achieve higher sample efficiency and better performance, and generalize well across different tasks.
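To make the short-term-dynamics limitation concrete, the following is a minimal sketch of a single-step bisimulation-style metric loss of the kind the abstract builds on (in the spirit of deep bisimulation for control), not the RLF objective itself; the names `encoder`, `dynamics_model`, the batch fields, and the Gaussian latent-dynamics assumption are all hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def bisimulation_repr_loss(encoder, dynamics_model, batch, gamma=0.99):
    """Single-step bisimulation-style representation loss (a sketch).

    The embedding distance between two states is regressed toward the
    reward difference plus the discounted distance between their
    predicted next-latent distributions, so only one-step ("short-term")
    dynamics shape the representation.
    """
    obs, action, reward = batch["obs"], batch["action"], batch["reward"]

    z = encoder(obs)                      # latent embeddings, shape (B, D)
    perm = torch.randperm(z.size(0))      # pair each sample with a random partner
    z2, action2, reward2 = z[perm], action[perm], reward[perm]

    # Predicted next-latent Gaussian for each state-action pair (assumed interface).
    mu, sigma = dynamics_model(z, action)
    mu2, sigma2 = dynamics_model(z2, action2)

    # L1 distance between embeddings should match the bisimulation-style target:
    # reward difference plus discounted 2-Wasserstein distance between the
    # predicted next-state Gaussians (closed form: mean and std differences).
    z_dist = torch.sum(torch.abs(z - z2.detach()), dim=-1)
    r_dist = torch.abs(reward - reward2).squeeze(-1)
    w2_dist = (torch.norm(mu.detach() - mu2.detach(), dim=-1)
               + torch.norm(sigma.detach() - sigma2.detach(), dim=-1))

    target = r_dist + gamma * w2_dist
    return F.mse_loss(z_dist, target)
```

RLF, as described above, replaces the single-step target with distances derived from long-term dynamics learned by a graph neural network; the sketch only illustrates the baseline metric that motivates that change.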