Convolution on 3D point clouds has been extensively explored in geometric deep learning, but it is far from perfect. Convolution operations with fixed kernels treat all feature-pair correspondences identically, an inherent drawback that limits the learning of distinctive features. This paper proposes Sampling Adaptive Kernels from Subspace (SAKS), a novel approach for graph convolution. It adaptively constructs convolution kernels for different feature correspondences according to their unique coordinate representations in a learned subspace. Associating the subspace design with the deep network is a novel concept that offers a different viewpoint on feature learning. Specifically, incomplete orthogonal bases are learned at each convolution layer to span a linear subspace in an elaborately designed manner. Adaptive kernels are then sampled from the learned subspace via unique coordinates parameterized by feature pairs. Unlike existing adaptive convolution methods, which generate kernels in a brute-force manner, the low-rank property of the subspace reduces the computational complexity of this method. Moreover, we theoretically prove that the proposed SAKS derives the principal components of the kernel distribution, analogous to principal component analysis under certain prior assumptions. Extensive experimental results on point cloud classification and segmentation tasks show that SAKS outperforms state-of-the-art methods on various benchmark datasets.
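The core idea above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the dimensions, the projection `W`, and the way the orthonormal basis is obtained (a QR decomposition of a random matrix) are all assumptions made for the sketch. The key points it shows are (i) the kernel subspace is spanned by a small number of orthonormal basis kernels, and (ii) each feature pair yields coordinates in that subspace from which its adaptive kernel is reconstructed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): feature dimension d, subspace rank r.
d, r = 8, 3

# Incomplete orthogonal basis: r orthonormal columns, each a flattened
# (d*d)-dim "basis kernel", spanning a low-rank linear kernel subspace.
# In the paper these bases are learned per layer; here QR stands in.
B, _ = np.linalg.qr(rng.standard_normal((d * d, r)))  # (d*d, r)

def coords(f_i, f_j, W):
    """Subspace coordinates parameterized by a feature pair.
    W is a hypothetical learned map from the concatenated pair to r coords."""
    return W @ np.concatenate([f_i, f_j])  # shape (r,)

def sample_kernel(f_i, f_j, W):
    """Sample an adaptive kernel as a point in the learned subspace."""
    c = coords(f_i, f_j, W)                # (r,) coordinates
    return (B @ c).reshape(d, d)           # (d, d) adaptive kernel

W = 0.1 * rng.standard_normal((r, 2 * d))
f_i, f_j = rng.standard_normal(d), rng.standard_normal(d)
K = sample_kernel(f_i, f_j, W)

# Low-rank property: each pair needs only r coordinates plus a basis
# combination, rather than directly predicting all d*d kernel entries.
```

Because every sampled kernel lies in an r-dimensional subspace, the per-pair cost is governed by the rank r rather than the full kernel size, which is the source of the complexity reduction claimed above.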