Subspace learning has shown tremendous potential in machine learning and computer vision due to its effectiveness. Subspace representation is a key subspace learning method that encodes subspace membership information. To effectively encode the subspace memberships of data, structured prior constraints, such as low-rankness and sparsity, are imposed on the subspace representation. To handle various noises, existing methods tend to separate each specific type of noise in a correspondingly tailored way to obtain a robust subspace representation; when diversified noises are present simultaneously, their subspace-preserving property may no longer be guaranteed. To address this issue, we propose a novel unsupervised robust discriminative subspace representation that mitigates the impact of diversified noises via discriminative approximate isometric embedding, rather than directly separating noises in the high-dimensional space as existing methods do. To ensure the performance of our approach, we provide a crucial theorem, termed the noisy Johnson–Lindenstrauss theorem. Meanwhile, a Laplacian rank constraint is imposed on the discriminative subspace representation to uncover the ground-truth subspace memberships of noisy data and improve the graph connectivity of subspaces. Extensive experiments on several benchmark datasets and two large-scale datasets validate the effectiveness and robustness of our approach with respect to diversified noises.
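As background for the embedding idea the abstract invokes, the sketch below illustrates the classical Johnson–Lindenstrauss phenomenon: a scaled random Gaussian projection approximately preserves pairwise distances. This is only a minimal illustration of the underlying isometry principle; the dimensions, seed, and helper `pdist` are assumptions for the demo, and it is not the paper's noisy variant or its discriminative embedding.

```python
import numpy as np

rng = np.random.default_rng(0)

# n points in R^d, projected down to R^k (dimensions chosen for illustration).
n, d, k = 50, 2000, 400
X = rng.standard_normal((n, d))

# Random projection scaled by 1/sqrt(k) so squared norms are preserved
# in expectation, as in the classical Johnson-Lindenstrauss construction.
P = rng.standard_normal((d, k)) / np.sqrt(k)
Y = X @ P

def pdist(Z):
    """All pairwise Euclidean distances between rows of Z."""
    diff = Z[:, None, :] - Z[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

D_orig, D_proj = pdist(X), pdist(Y)
mask = ~np.eye(n, dtype=bool)          # ignore zero self-distances
ratios = D_proj[mask] / D_orig[mask]

# Distance distortions concentrate near 1, i.e. the embedding is
# approximately isometric on this point set.
print(ratios.min(), ratios.mean(), ratios.max())
```

With `k` on the order of a few hundred, the ratios cluster tightly around 1; shrinking `k` widens the spread, which is the trade-off the JL lemma quantifies.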