Face recognition has achieved remarkable success owing to the development of deep learning. However, most existing face recognition models perform poorly under pose variations. We argue that this is primarily caused by pose-based long-tailed data: the imbalanced distribution of training samples between profile faces and near-frontal faces. Additionally, self-occlusion and nonlinear warping of facial textures caused by large pose variations further increase the difficulty of learning discriminative features for profile faces. In this study, we propose a novel framework called the Symmetrical Siamese Network (SSN), which can simultaneously overcome the limitation of pose-based long-tailed data and learn pose-invariant features. Specifically, two sub-modules are proposed in the SSN, i.e., the Feature-Consistency Learning sub-Net (FCLN) and the Identity-Consistency Learning sub-Net (ICLN). For FCLN, the inputs are all face images in the training dataset. Inspired by contrastive learning, we simulate pose variations of faces and constrain the model to focus on the areas that are consistent between the original face image and its corresponding virtual pose-varied face images. For ICLN, only profile images are used as inputs, and we adopt an Identity Consistency Loss to minimize the intra-class feature variation across different poses. The collaborative learning of the two sub-modules guarantees that the network parameters are updated with roughly equal probability for near-frontal and profile face images, so that the pose-based long-tailed problem can be effectively addressed. The proposed SSN achieves results comparable to state-of-the-art methods on several public datasets. In this study, LightCNN is selected as the backbone of the SSN, and other popular networks can also be used in our framework for pose-robust face recognition.
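The abstract describes the ICLN as minimizing intra-class feature variation across poses via an Identity Consistency Loss. As a rough illustration only (the paper's exact formulation is not given here), one common way to express such a constraint is to penalize the squared deviation of each identity's features from that identity's centroid; the function name and centroid-based form below are assumptions, not the paper's definition:

```python
import numpy as np

def identity_consistency_loss(features, labels):
    """Illustrative centroid-based consistency loss (assumption, not the
    paper's exact loss): for each identity, penalize the mean squared
    distance of its features from the identity centroid, pulling features
    of the same person across different poses together."""
    identities = np.unique(labels)
    loss = 0.0
    for ident in identities:
        feats = features[labels == ident]   # all features of one identity
        center = feats.mean(axis=0)         # identity centroid
        loss += np.mean(np.sum((feats - center) ** 2, axis=1))
    return loss / len(identities)

# Identical features per identity incur zero loss; pose-induced
# feature spread within an identity increases the loss.
tight = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
spread = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0], [0.4, 0.6]])
labels = np.array([0, 0, 1, 1])
```

In practice such a term would be computed per mini-batch on the embeddings of profile images and added to the standard classification loss.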