Previous versions of sparse principal component analysis (PCA) have presumed that the eigen-basis (a p × k matrix) is approximately sparse. We propose a method that presumes the p × k matrix becomes approximately sparse after a k × k rotation. The simplest version of the algorithm initializes with the leading k principal components. Then, the principal components are rotated with a k × k orthogonal rotation to make them approximately sparse. Finally, soft-thresholding is applied to the rotated principal components. This approach differs from prior approaches because it uses an orthogonal rotation to approximate a sparse basis. One consequence is that a sparse component need not be a leading eigenvector, but can instead be a mixture of them. In this way, we propose a new (rotated) basis for sparse PCA. In addition, our approach avoids “deflation” and the multiple tuning parameters it requires. Our sparse PCA framework is versatile; for example, it extends naturally to a two-way analysis of a data matrix for simultaneous dimensionality reduction of rows and columns. We provide evidence showing that, for the same level of sparsity, the proposed sparse PCA method is more stable and explains more variance than alternative methods. Through three applications (sparse coding of images, analysis of transcriptome sequencing data, and large-scale clustering of social networks), we demonstrate the modern usefulness of sparse PCA in exploring multivariate data. An R package, epca, and the supplementary materials for this article are available online.
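
As a rough illustration of the three-step procedure sketched above (initialize, rotate, soft-threshold), the following is a minimal R sketch, assuming a centered n × p data matrix `x`, a target rank `k` of at least 2, and a hypothetical soft-threshold level `gamma`; it is not the full method, which the accompanying epca package implements.

```r
# Minimal sketch: sparse PCA via rotation and soft-thresholding.
# Assumptions (not from the abstract): `x` is an n x p numeric matrix,
# `k` >= 2 is the number of components, `gamma` >= 0 is a user-chosen
# soft-threshold level.
sparse_pca_sketch <- function(x, k, gamma) {
  # Step 1: leading k principal components, i.e. the p x k loading matrix
  # from the SVD of the column-centered data.
  xc <- scale(x, center = TRUE, scale = FALSE)
  v  <- svd(xc, nu = 0, nv = k)$v

  # Step 2: a k x k orthogonal (varimax) rotation that makes the
  # components approximately sparse.
  r     <- stats::varimax(v, normalize = FALSE)$rotmat
  v_rot <- v %*% r

  # Step 3: entrywise soft-thresholding of the rotated components.
  sign(v_rot) * pmax(abs(v_rot) - gamma, 0)
}

# Example usage on simulated data:
# set.seed(1)
# x <- matrix(rnorm(100 * 20), 100, 20)
# v_sparse <- sparse_pca_sketch(x, k = 3, gamma = 0.1)
```

Because the rotation mixes the leading eigenvectors before thresholding, each resulting sparse component is a rotated combination of them rather than a single (deflated) eigenvector, which is the key difference from deflation-based sparse PCA noted in the abstract.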