Data representation aims to learn an efficient low-dimensional representation of data, which remains a challenging task in machine learning and computer vision and can substantially improve the performance of downstream learning tasks. Unsupervised methods, which exploit the internal connections among data, are widely applied to data representation. However, most existing unsupervised models rely on a specific norm that favors certain distributions of the input data, so their encouraging performance cannot be sustained across learning tasks. In this paper, we propose an efficient data representation method to address large-scale feature representation problems, in which a deep random walk with unitary invariance is exploited to learn discriminative features. First, data representation is formulated as a deep random walk problem, where unitarily invariant norms are employed to capture diverse beneficial perspectives hidden in the data. This formulation is embedded into a state transition matrix model, in which an arbitrary number of transition steps is available for accurate affinity evaluation. Second, the data representation problem is then transformed into a high-order matrix factorization task with unitary invariance. Third, a closed-form solution is proved for the formulated problem, which may provide a new perspective for solving high-order matrix factorization problems. Finally, extensive comparative experiments are conducted on publicly available real-world data sets. Experimental results demonstrate that the proposed method outperforms state-of-the-art compared approaches in terms of data clustering.
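To make the multi-step transition idea concrete, the following is a minimal sketch of how a random-walk affinity with an arbitrary number of transition steps can be computed from a pairwise affinity matrix. It is an illustration of the general principle only, not the paper's model: the function name `multistep_affinity`, the toy matrix `W`, and the choice of plain row normalization are all assumptions introduced here for demonstration.

```python
import numpy as np

def multistep_affinity(W, steps=3):
    """Illustrative multi-step random-walk affinity (not the paper's model).

    W     : symmetric nonnegative pairwise affinity matrix (n x n)
    steps : number of transition steps t; P^t accumulates the
            probabilities of t-step walks between samples.
    """
    # Row-normalize W into a stochastic state transition matrix P.
    P = W / W.sum(axis=1, keepdims=True)
    # Entry (i, j) of P^t is the probability of walking from
    # sample i to sample j in exactly t steps.
    return np.linalg.matrix_power(P, steps)

# Toy affinity over four samples forming two loosely linked clusters.
W = np.array([[0.0, 1.0, 0.1, 0.0],
              [1.0, 0.0, 0.0, 0.1],
              [0.1, 0.0, 0.0, 1.0],
              [0.0, 0.1, 1.0, 0.0]])
A = multistep_affinity(W, steps=3)
```

Taking higher powers of the transition matrix lets affinity propagate along indirect paths, so samples in the same cluster accumulate larger multi-step affinity than samples in different clusters.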