Path planning for stratospheric airships holds significant potential for high-altitude platform station applications. However, current research faces certain limitations. On the one hand, existing approaches are predominantly manual owing to the complex scenarios and numerous factors involved, which makes them inefficient and unable to anticipate the dynamic wind field. On the other hand, path planning approaches employed by other aircraft rarely account for the dynamic wind field and energy cycle characteristics, rendering them unsuitable for airships. To address these challenges, this paper proposes a novel path planning approach based on deep reinforcement learning. Firstly, the state space and the reward function are carefully designed according to the characteristics of the airship. Secondly, heterogeneous data, including dynamic wind field data and airship state features, are fused as the state input for the model. Furthermore, a dueling double deep recurrent Q network is used as the path planning model. In contrast to models employing a double deep recurrent Q network or a heuristic algorithm, the proposed approach maintains a higher energy level and a higher success rate, empowering airships to adeptly handle emergencies.
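The two value-estimation ideas named above can be sketched compactly. The following is a minimal NumPy illustration, not the paper's implementation: the recurrent encoder and the airship-specific state and reward are omitted, and the function names are hypothetical. It shows the dueling aggregation of a state value and per-action advantages into Q-values, and the double-DQN target in which the online network selects the next action while the target network evaluates it.

```python
import numpy as np

def dueling_q(value, advantages):
    """Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).

    Subtracting the mean advantage keeps the value/advantage
    decomposition identifiable.
    """
    return value + advantages - advantages.mean(axis=-1, keepdims=True)

def double_dqn_target(reward, gamma, q_online_next, q_target_next, done):
    """Double-DQN target: the online net picks argmax_a Q(s', a),
    the target net evaluates that action, reducing overestimation bias.
    """
    best_action = np.argmax(q_online_next, axis=-1)
    q_eval = np.take_along_axis(
        q_target_next, best_action[..., None], axis=-1
    ).squeeze(-1)
    return reward + gamma * (1.0 - done) * q_eval

# Toy batch of one state with two actions.
q = dueling_q(np.array([[1.0]]), np.array([[1.0, 3.0]]))
target = double_dqn_target(
    reward=np.array([1.0]),
    gamma=0.9,
    q_online_next=np.array([[0.2, 0.8]]),
    q_target_next=np.array([[0.5, 0.3]]),
    done=np.array([0.0]),
)
```

In a full agent these heads would sit on top of a recurrent encoder (hence "deep recurrent"), so Q-values are conditioned on a hidden state summarizing the observation history, which is what lets the policy respond to a dynamic wind field.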