High-dimensional data typically contain numerous redundant and irrelevant features which, if ignored, can degrade the performance of downstream data processing tasks. Unsupervised feature selection (UFS), which filters such features out of unlabeled data, has therefore attracted considerable attention across many domains. Most existing UFS methods build a parameterized model in the original feature space or in some fixed low-dimensional subspace, and thus cannot fully capture the information of the target space. In this work, we propose a novel UFS method that unifies an explicit feature selection matrix and a structured graph within a single learning framework. The selection matrix is tailored to UFS by constraining its scale, the bounds of its elements, and its structure. Furthermore, we account for local consistency so that low-dimensional manifold structures are better preserved. Instead of constructing the graph from a pre-defined similarity matrix, we learn an adaptive graph, and we impose a rank constraint on it to exploit the cluster structure of the data. Notably, both the explicit selection matrix and the intrinsic structure are learned in the target feature subspace, which makes the proposed method more direct and effective. We present an effective and efficient algorithm for the resulting problem based on an alternating optimization strategy, and establish its convergence and computational complexity analysis. Experimental results show that the proposed approach improves clustering accuracy (ACC) and normalized mutual information (NMI) by more than 5.28% and 5.79% on average over the comparison methods, demonstrating its superiority.
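To make the rank constraint on the learned graph concrete, the following is a minimal sketch of a formulation commonly used in this line of work; the symbols $S$, $L_S$, $F$, $n$, $c$, $\sigma_i$, and $\lambda$ are generic notation assumed here and are not necessarily those adopted in the paper. Because the graph Laplacian $L_S$ of a similarity matrix $S$ over $n$ samples is positive semidefinite, forcing its $c$ smallest eigenvalues to vanish guarantees that the learned graph decomposes into at least $c$ connected components, which is the usual surrogate for the constraint $\operatorname{rank}(L_S) = n - c$. By Ky Fan's theorem this condition can be relaxed into a trace penalty over a spectral embedding matrix:

% Sketch of a rank-constrained adaptive graph term (generic notation, not the paper's exact objective).
% L_S = D_S - (S + S^T)/2 is the Laplacian of the learned similarity matrix S,
% sigma_i(L_S) its i-th smallest eigenvalue, c the desired number of clusters.
\begin{align}
  \sum_{i=1}^{c} \sigma_i(L_S) = 0
  \;\Longleftrightarrow\;
  \min_{F \in \mathbb{R}^{n \times c},\; F^{\top} F = I} \operatorname{Tr}\!\left(F^{\top} L_S F\right) = 0 ,
\end{align}

so the hard rank constraint can be handled by adding a penalty of the form $\lambda \operatorname{Tr}(F^{\top} L_S F)$ to the objective and alternately updating $S$ and $F$, where the optimal columns of $F$ are the eigenvectors of $L_S$ associated with its $c$ smallest eigenvalues.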