Spacecraft relative trajectory planning is central to many space missions, such as on-orbit servicing and debris removal. Although early attempts at machine-learning-based trajectory planning have been made, they still face the open problem of acquiring sufficient training samples for robust training. In this article, we introduce a meta-learning framework that improves the adaptation ability of the planner when it faces new initial conditions. To achieve this goal, we divide the training trajectories into sub-training samples and fake testing samples. The meta planner is then trained by repeatedly conducting a trial training-and-testing process: the gradient information of the meta learner is first obtained on the sub-training sets and is further adjusted according to its testing performance on the fake testing data. The meta planner therefore explicitly takes the potential testing performance into account and hence alleviates overfitting when only a few training trajectories are available. Simulation results substantiate the effectiveness of our approach, as well as its ability to adapt quickly to new initial conditions without overfitting.
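
The trial training-and-testing procedure described above follows the general pattern of gradient-based meta-learning (e.g., MAML-style training). The sketch below is a minimal first-order approximation of such a loop, not the authors' exact method; the `meta_planner` network, the MSE loss, the learning rates, and the `(sub_train, fake_test)` task format are all illustrative assumptions.

```python
import copy
import torch

def meta_train_step(meta_planner, tasks, inner_lr=1e-2, meta_lr=1e-3, inner_steps=3):
    """One trial training-and-testing pass (first-order, MAML-style sketch).

    Each task provides a (sub_train, fake_test) split of its trajectories,
    assumed here to be (inputs, targets) tensor pairs.
    """
    meta_grads = [torch.zeros_like(p) for p in meta_planner.parameters()]

    for sub_train, fake_test in tasks:
        # Inner loop: adapt a copy of the planner on the sub-training set.
        adapted = copy.deepcopy(meta_planner)
        opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
        for _ in range(inner_steps):
            x, y = sub_train
            loss = torch.nn.functional.mse_loss(adapted(x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()

        # Outer signal: evaluate the adapted planner on the fake testing set
        # and accumulate its gradient (first-order approximation).
        x_t, y_t = fake_test
        test_loss = torch.nn.functional.mse_loss(adapted(x_t), y_t)
        grads = torch.autograd.grad(test_loss, adapted.parameters())
        for mg, g in zip(meta_grads, grads):
            mg += g / len(tasks)

    # Meta update: move the meta planner in the direction that improves
    # post-adaptation performance on the fake testing data.
    with torch.no_grad():
        for p, mg in zip(meta_planner.parameters(), meta_grads):
            p -= meta_lr * mg
    return meta_planner
```

In this reading, the fake testing loss is what couples the meta update to post-adaptation performance, which is how the abstract's overfitting mitigation would arise under these assumptions.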