Cross-view image geo-localization aims to determine the geo-location (Global Positioning System (GPS) coordinates) of a query ground-view image by matching it against GPS-tagged aerial (or satellite) images in a reference dataset. The problem is challenging due to the dramatic domain gap between ground and aerial views. Existing approaches mainly adopt convolutional neural networks (CNNs) to learn discriminative features. However, these CNN-based methods chiefly leverage appearance and semantic information and fail to jointly model the appearance, positional, and orientation properties of scene objects, which together constitute the spatial hierarchy of a scene. Since spatial hierarchy information is crucial for establishing cross-view feature correspondence, in this article we propose an end-to-end network architecture, dubbed GeoNet, consisting of a ResNetX module and a GeoCaps module. On the one hand, the ResNetX module learns powerful intermediate feature maps and allows gradients to propagate stably through deep CNNs. On the other hand, the GeoCaps module employs a capsule network to encapsulate the intermediate feature maps into capsules whose length and orientation represent, respectively, the existence probability and the spatial hierarchy information of scene objects. Moreover, through a dynamic routing-by-agreement mechanism, the GeoCaps module models parts-to-whole relationships between scene objects, which are viewpoint invariant and help bridge the cross-view domain gap. In addition to GeoNet, we introduce a simple yet effective metric learning method, from which two weighted soft margin loss functions with online batch hard sample mining are devised; these losses not only speed up convergence but also improve the generalization ability of the network. Extensive experiments on three well-known datasets demonstrate that GeoNet achieves state-of-the-art results on the ground-to-aerial and aerial-to-ground geo-localization tasks and outperforms competing approaches on the few-shot geo-localization task.
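To make the routing-by-agreement step concrete, the sketch below shows the generic dynamic routing procedure of Sabour et al. (2017), which capsule networks such as the GeoCaps module build on. The tensor shapes, iteration count, and function names here are illustrative assumptions, not the exact GeoCaps implementation.

```python
import torch
import torch.nn.functional as F

def squash(s, dim=-1, eps=1e-8):
    # Squashing non-linearity: preserves orientation while mapping the
    # vector length into [0, 1), so capsule length reads as an
    # existence probability.
    sq = (s * s).sum(dim=dim, keepdim=True)
    return (sq / (1.0 + sq)) * s / torch.sqrt(sq + eps)

def dynamic_routing(u_hat, num_iters=3):
    """Generic routing-by-agreement (Sabour et al., 2017).

    u_hat: (B, num_in, num_out, D) prediction vectors sent from
    lower-level ("part") capsules to higher-level ("whole") capsules.
    """
    # Routing logits, initialized to zero (uniform coupling).
    b = torch.zeros(u_hat.shape[:3], device=u_hat.device)
    for _ in range(num_iters):
        c = F.softmax(b, dim=2)               # coupling coefficients
        s = (c.unsqueeze(-1) * u_hat).sum(1)  # weighted sum over parts
        v = squash(s)                         # (B, num_out, D)
        # Strengthen routes whose predictions agree with the output capsule.
        b = b + (u_hat * v.unsqueeze(1)).sum(-1)
    return v
```

Because the agreement scores depend on relative part-to-whole pose rather than absolute position, the resulting couplings are comparatively robust to viewpoint changes, which is the property the abstract invokes for bridging the cross-view gap.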
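For illustration, here is a minimal sketch of one plausible weighted soft margin triplet loss with online batch-hard negative mining, assuming L2-normalized ground/aerial embeddings paired row-wise within a batch. The function name, the weighting factor alpha, and this single-loss form are assumptions; the article devises two variants whose exact forms appear in the main text.

```python
import torch
import torch.nn.functional as F

def weighted_soft_margin_loss(grd_feats, sat_feats, alpha=10.0):
    """Weighted soft-margin triplet loss with online batch-hard mining.

    grd_feats, sat_feats: (B, D) L2-normalized embeddings of matched
    ground/aerial images; row i of each tensor forms a positive pair.
    alpha: weighting factor that sharpens the soft margin log(1 + e^x).
    """
    # Pairwise Euclidean distances between all ground/aerial embeddings.
    dists = torch.cdist(grd_feats, sat_feats)  # (B, B)
    pos = dists.diag()                         # distances of matched pairs
    # Mask out positives, then mine the hardest (closest) negative
    # for each anchor within the batch.
    eye = torch.eye(len(dists), dtype=torch.bool, device=dists.device)
    neg = dists.masked_fill(eye, float('inf')).min(dim=1).values
    # Weighted soft margin: log(1 + exp(alpha * (d_pos - d_neg))).
    return F.softplus(alpha * (pos - neg)).mean()
```

The softplus form replaces a hard margin hyperparameter with a smooth, always-informative gradient, which is one reason such losses tend to converge faster than plain margin-based triplet losses.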