Abstract

Objective: To determine the extent to which a group of embedded performance validity tests (PVTs) predicts performance on a stand-alone test, and to examine the classification accuracy of embedded PVTs.

Method: Participants were 102 adult patients who underwent neuropsychological evaluation (M age = 31.85, SD = 11.79; M education = 13.82, SD = 2.50; M FSIQ = 104.72). Patients were administered the Medical Symptom Validity Test (MSVT), and 10 embedded PVT scores were extracted from their neuropsychological profiles. Logistic regression was used to determine the extent to which scores on the embedded PVTs predicted MSVT designation as credible or noncredible. Classification accuracy was also calculated for the embedded PVTs, both individually and based on the number of “failures.”

Results: The final regression model, χ²(6) = 20.06, p = .003, Nagelkerke R² = .42, was 38% sensitive and 97% specific, and it correctly predicted credible patients (negative predictive accuracy = 95%) more often than noncredible patients (positive predictive accuracy = 50%). Several embedded PVTs had comparable classification accuracy. Conservatively defining noncredible performance as failure on at least three embedded PVTs produced the best combination of sensitivity (50%) and specificity (86%), corresponding to a false positive rate of 14%.

Conclusions: Embedded PVTs predict credible and noncredible performance on the MSVT with a high degree of accuracy. Although a multivariate predictive approach may have slightly more utility than examining embedded PVT scores separately, several individual PVTs have comparable accuracy. It remains unclear which approach is superior, and more research is needed.
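The Results rest on two analytic steps: a logistic regression predicting the MSVT criterion from the embedded PVT scores, and classification-accuracy statistics (sensitivity, specificity, and positive/negative predictive accuracy) computed both for individual PVTs and for a cut-off based on the number of failures. The sketch below illustrates how such metrics can be computed; the simulated data, failure cutoff, and variable names are hypothetical and are not drawn from the study's dataset.

```python
# Hypothetical sketch: classification accuracy of embedded PVTs against a
# stand-alone criterion (e.g., MSVT pass/fail). All data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_patients = 102

# Binary criterion: 1 = noncredible (failed the stand-alone PVT), 0 = credible.
criterion = rng.binomial(1, 0.2, size=n_patients)

# Simulated scores on 10 embedded PVTs; noncredible patients score lower on
# average so the illustration has signal to detect.
embedded_scores = rng.normal(size=(n_patients, 10)) - 1.5 * criterion[:, None]

# Multivariate approach: logistic regression predicting the criterion from
# all embedded PVT scores (analogous in spirit to the study's final model).
model = LogisticRegression(max_iter=1000).fit(embedded_scores, criterion)
predicted = model.predict(embedded_scores)

def accuracy_stats(pred, truth):
    """Sensitivity, specificity, PPV, and NPV, treating 1 as 'noncredible'."""
    tp = np.sum((pred == 1) & (truth == 1))
    tn = np.sum((pred == 0) & (truth == 0))
    fp = np.sum((pred == 1) & (truth == 0))
    fn = np.sum((pred == 0) & (truth == 1))
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),   # positive predictive accuracy
        "NPV": tn / (tn + fn),   # negative predictive accuracy
    }

print("regression model:", accuracy_stats(predicted, criterion))

# Cut-score approach: flag a patient as noncredible when they "fail" at least
# three embedded PVTs (here, failure = score below an assumed cutoff of -1.0).
failures = (embedded_scores < -1.0).sum(axis=1)
flagged = (failures >= 3).astype(int)
print("≥3 failures rule:", accuracy_stats(flagged, criterion))
```

With real data, the criterion would be the MSVT credible/noncredible designation and the predictors the actual embedded PVT scores; the simulated values here serve only to make the sketch runnable.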