To overcome the restriction of the identically distributed assumption, invariant representation learning for unsupervised domain adaptation (UDA) has made significant advances in the computer vision and pattern recognition communities. In the UDA scenario, the training and test data belong to different domains, while the task model is learned to be invariant across them. Recently, empirical connections between transferability and discriminability have received increasing attention, and these two abilities are key to understanding invariant representations. However, a theoretical study of these abilities and an in-depth analysis of the learned feature structures remain unexplored. In this work, we systematically analyze the essentials of transferability and discriminability from a geometric perspective. Our theoretical results provide insights into understanding the co-regularization relation and prove that learning both abilities is possible. From the methodological aspect, the abilities are formulated as geometric properties between domain/cluster subspaces (i.e., orthogonality and equivalence) and characterized as relations between the norms/ranks of multiple matrices. Two optimization-friendly learning principles are derived, which also admit intuitive explanations. Moreover, a feasible range for the co-regularization parameters is deduced to balance the learning of the geometric structures. Based on the theoretical results, a geometry-oriented model is proposed to enhance transferability and discriminability via nuclear norm optimization. Extensive experimental results validate the effectiveness of the proposed model in empirical applications and verify that the geometric abilities can be sufficiently learned within the derived feasible range.
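To make the nuclear norm optimization mentioned above concrete, the sketch below shows how nuclear norms of batch feature matrices can serve as differentiable surrogates for rank-based geometric terms. This is a minimal illustration under assumed names and weightings (the function `nuclear_norm_regularizer`, the feature matrices, and the coefficients `lambda_t`/`lambda_d` are hypothetical), not the exact objective of the proposed model.

```python
import torch

def nuclear_norm_regularizer(feat_src, feat_tgt, lambda_t=1.0, lambda_d=1.0):
    """Hypothetical sketch: nuclear norms as rank surrogates.

    The specific combination of terms is an illustrative assumption,
    not the paper's co-regularization objective.
    """
    # Stack source and target batch features; a small joint nuclear norm
    # encourages the two domains to share a low-rank (aligned) subspace,
    # serving here as an illustrative transferability surrogate.
    joint = torch.cat([feat_src, feat_tgt], dim=0)
    transfer_term = torch.linalg.matrix_norm(joint, ord='nuc')

    # Per-domain nuclear norms; keeping them large (subtracted below)
    # discourages feature collapse within each domain, serving as an
    # illustrative discriminability surrogate.
    discrim_term = (torch.linalg.matrix_norm(feat_src, ord='nuc')
                    + torch.linalg.matrix_norm(feat_tgt, ord='nuc'))

    # lambda_t / lambda_d play the role of co-regularization parameters
    # whose feasible range would balance the two geometric structures.
    return lambda_t * transfer_term - lambda_d * discrim_term

# Usage with random stand-in features (batch of 32, 256-dimensional):
feat_src = torch.randn(32, 256, requires_grad=True)
feat_tgt = torch.randn(32, 256, requires_grad=True)
loss = nuclear_norm_regularizer(feat_src, feat_tgt, lambda_t=1.0, lambda_d=0.5)
loss.backward()
```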