Although deep reinforcement learning (DRL) has been widely applied to robotic mapless navigation, most existing research focuses on structured environments such as indoor or maze scenes, and few studies target outdoor environments. Unlike indoor scenes and mazes, outdoor fields tend to be unstructured, with complex landforms and sparse rewards for robotic navigation. The performance of most DRL-based strategies depends directly on the design of the reward function, which greatly limits their generalizability to outdoor environments. To address this challenge, we propose a two-stage learning paradigm based on skill discovery and hierarchical reinforcement learning (HRL). Specifically, a pre-training stage uses skill discovery to acquire diverse skills with terrain-adaptive exploration strategies; a high-level HRL policy then selects among these skills to handle more complex scenarios. We evaluate our approach on a robotic multi-terrain traversal task built on the high-fidelity robotic simulation platform Webots, and conduct extensive comparative experiments and ablation studies to demonstrate its effectiveness.
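The two-stage structure can be sketched in miniature. The following is an illustrative assumption, not the paper's implementation: the pre-training stage is assumed to have already produced a set of low-level skills (here reduced to fixed displacements in a toy 1-D corridor), and the high-level stage is stood in for by tabular Q-learning that selects one skill every K low-level steps under a sparse terminal reward. All names, the environment, and the hyperparameters are hypothetical.

```python
import random

NUM_SKILLS = 3   # skills assumed to come from the pre-training stage
K = 4            # high-level decision interval (low-level steps per skill)
GOAL = 8         # toy 1-D corridor: navigate from position 0 to GOAL

def skill_action(skill_id, pos):
    # Stand-in for a pre-trained skill policy: each skill is a fixed
    # displacement; a real skill would be a learned, state-dependent policy.
    return (-1, 0, 1)[skill_id]

def run_skill(pos, skill_id):
    """Execute one selected skill for up to K low-level steps."""
    reward = 0.0
    for _ in range(K):
        pos = max(0, min(GOAL, pos + skill_action(skill_id, pos)))
        if pos == GOAL:            # sparse terminal reward
            return pos, reward + 1.0, True
        reward -= 0.01             # small per-step penalty
    return pos, reward, False

def train_high_level(episodes=300, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning over skill indices: a minimal HRL controller."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(GOAL + 1) for a in range(NUM_SKILLS)}
    for _ in range(episodes):
        pos = 0
        for _ in range(50):        # cap high-level decisions per episode
            if rng.random() < eps:
                a = rng.randrange(NUM_SKILLS)
            else:
                a = max(range(NUM_SKILLS), key=lambda j: q[(pos, j)])
            nxt, r, done = run_skill(pos, a)
            target = r if done else r + gamma * max(
                q[(nxt, j)] for j in range(NUM_SKILLS))
            q[(pos, a)] += alpha * (target - q[(pos, a)])
            pos = nxt
            if done:
                break
    return q

q = train_high_level()
best_skill_at_start = max(range(NUM_SKILLS), key=lambda j: q[(0, j)])
```

After training, the high-level controller prefers the forward-moving skill at the start state, illustrating how skill selection, rather than a hand-tuned dense reward, drives navigation in the sparse-reward setting.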