Dynamic network embedding (DNE) remains a challenging problem in graph representation learning, particularly under the frequent updates of streaming data. Conventional DNE methods rely primarily on parameter updating and therefore perform poorly on historical networks, a problem known as catastrophic forgetting. To tackle this issue, recent advances in graph neural networks (GNNs) have explored matrix factorization techniques; however, these approaches struggle to preserve the global patterns of incremental data. In this paper, we propose CLDNE, a Continual Learning framework designed for Dynamic Network Embedding. At the core of CLDNE lies a streaming graph auto-encoder that captures both global and local patterns of the input graph. To further overcome catastrophic forgetting, CLDNE is equipped with an experience replay buffer and a knowledge distillation module, which preserve high-order historical topology and static historical patterns, respectively. We evaluate CLDNE on four dynamic networks using link prediction and node classification tasks. The results demonstrate that CLDNE successfully mitigates catastrophic forgetting and reduces training time by 80% without significantly compromising its ability to learn new patterns.
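To make the two anti-forgetting components concrete, the following is a minimal, self-contained sketch of an experience replay buffer and a knowledge distillation loss. It is illustrative only: the abstract does not specify CLDNE's buffer policy or distillation objective, so reservoir sampling and a mean-squared penalty against frozen historical embeddings are assumed as common stand-ins.

```python
import random

class ReplayBuffer:
    """Fixed-capacity buffer of historical samples.

    Uses reservoir sampling so every item seen so far has an equal
    chance of being retained (an assumed policy, not CLDNE's exact one).
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []
        self.seen = 0

    def add(self, item):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(item)
        else:
            # Replace a stored item with probability capacity / seen.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = item

    def sample(self, k):
        """Draw up to k stored historical samples for replay."""
        return random.sample(self.items, min(k, len(self.items)))


def distillation_loss(new_emb, old_emb):
    """Mean squared distance between current embeddings and frozen
    teacher (historical) embeddings; penalizes drift on old nodes."""
    per_node = [
        sum((a - b) ** 2 for a, b in zip(n, o)) / len(n)
        for n, o in zip(new_emb, old_emb)
    ]
    return sum(per_node) / len(per_node)
```

In a continual-learning loop, each new graph snapshot's loss would be combined with `distillation_loss` over nodes drawn from the buffer, so updates on streaming data are regularized toward previously learned historical patterns.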