The dependence of the low-dimensional embedding on the principal component space seriously limits the effectiveness of existing robust principal component analysis (PCA) algorithms. Simply projecting the original sample coordinates onto orthogonal principal component directions may fail in various noise-corrupted scenarios, impairing both discriminability and recoverability. Our method addresses this issue through a generalized PCA (GPCA) that optimizes a regression bias rather than the sample mean, yielding more adaptable properties. Building on this, we propose a robust GPCA model whose loss and regularization are based on the ℓ2,μ norm and the ℓ2,ν norm, respectively. This approach not only mitigates sensitivity to outliers but also enhances the flexibility of feature extraction and selection. Additionally, we introduce a truncated and reweighted loss strategy, in which truncation eliminates severely deviated outliers and reweighting prioritizes the remaining samples; together, these innovations improve the GPCA model’s performance. To solve the proposed model, we develop a non-greedy iterative algorithm with a theoretical convergence guarantee. Experimental results demonstrate that the proposed GPCA model outperforms previous robust PCA models in both recoverability and discriminability.
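For concreteness, the sketch below gives one plausible instantiation of such a joint ℓ2,μ/ℓ2,ν objective under a standard reconstruction-error formulation; the symbols x_i, W, b, e_i, w^j, λ, and the truncation threshold ε are illustrative notation introduced here, not the paper's exact formulation.

% Illustrative sketch (assumed notation): data x_i in R^d, orthonormal
% projection W in R^{d x k}, regression bias b in R^d, and residual e_i.
% An l_{2,mu} reconstruction loss plus a row-wise l_{2,nu} regularizer:
\begin{equation}
  \min_{W^{\top}W = I_k,\; b}\;
  \sum_{i=1}^{n} \lVert e_i \rVert_2^{\mu}
  \;+\; \lambda \sum_{j=1}^{d} \lVert w^{j} \rVert_2^{\nu},
  \qquad
  e_i = x_i - b - WW^{\top}(x_i - b),
\end{equation}
% where w^j is the j-th row of W; taking mu < 2 tempers the influence of
% outliers on the loss, while the row-wise l_{2,nu} term encourages rows of
% W to vanish, acting as feature selection. A truncated and reweighted
% variant would replace each loss term by a weighted squared residual,
% with the standard iteratively-reweighted-least-squares weight zeroed
% beyond an assumed threshold epsilon:
\begin{equation}
  \sum_{i=1}^{n} s_i\, \lVert e_i \rVert_2^{2},
  \qquad
  s_i =
  \begin{cases}
    \tfrac{\mu}{2}\, \lVert e_i \rVert_2^{\mu-2}, & \lVert e_i \rVert_2 \le \varepsilon,\\
    0, & \lVert e_i \rVert_2 > \varepsilon,
  \end{cases}
\end{equation}

so that truncation discards samples whose residual exceeds ε, and reweighting shifts emphasis toward the remaining well-fitted samples.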