In standard kernel partial least squares (KPLS), the mapped data in the feature space must be centered before new score vectors are extracted. However, each centered variable tends toward a uniform distribution, and original features that reflect the contribution of each variable to fault diagnosis may be lost. As a result, the principal components can be misinterpreted and the false alarm rate in fault detection can increase. To cope with these difficulties, a novel data-driven framework using KPLS based on an optimal preference matrix (OPM) is presented in this paper. For fault monitoring, the OPM is introduced to change the distribution of the variables and to readjust the eigenvalues of the covariance matrix. To obtain the OPM, the objective function is defined in terms of the squared prediction error (SPE) and Hotelling's T-squared (T²) statistics. Two optimization algorithms, the genetic algorithm and particle swarm optimization, are extended to maximize the effectiveness of the OPM. Compared with traditional methods, the proposed method overcomes the loss of original features caused by centering the mapped data in the feature subspace and improves the accuracy of fault diagnosis, while requiring little extra computational cost in fault detection. Extensive experimental results on both the Tennessee Eastman benchmark process and a case study of the aluminum electrolytic production process demonstrate credible fault diagnosis.
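
The sketch below is a minimal illustration of the two ingredients named in the abstract: the standard centering of the kernel (Gram) matrix used in KPLS, and the T² and SPE monitoring statistics that enter the OPM objective. It is not the authors' implementation; the function names, the use of NumPy, and the treatment of the OPM as a simple diagonal weighting of the input variables are assumptions made for illustration only.

```python
import numpy as np

def center_kernel(K):
    """Center a kernel (Gram) matrix in feature space, as in standard KPLS:
    K_c = K - 1n K - K 1n + 1n K 1n, where 1n is the n x n matrix of 1/n."""
    n = K.shape[0]
    one_n = np.ones((n, n)) / n
    return K - one_n @ K - K @ one_n + one_n @ K @ one_n

def apply_preference_matrix(X, weights):
    """Hypothetical OPM step: rescale each input variable by a preference
    weight before kernel mapping (an assumption, not the paper's exact form)."""
    return X @ np.diag(weights)

def monitoring_statistics(T_train, t_new, residual_new):
    """Hotelling's T² and SPE statistics for a new sample.

    T_train      : training score matrix (n_samples x n_components)
    t_new        : score vector of the new sample
    residual_new : residual of the new sample after projection
    """
    n = T_train.shape[0]
    # Sample covariance of the training scores
    Lambda = (T_train.T @ T_train) / (n - 1)
    t2 = t_new @ np.linalg.inv(Lambda) @ t_new   # Hotelling's T²
    spe = residual_new @ residual_new            # squared prediction error
    return t2, spe
```

In this reading, a GA or PSO search would adjust the preference weights so that the resulting T² and SPE statistics best separate faulty from normal operation; the exact objective and encoding are described in the body of the paper, not here.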