Background
Memory decline is the earliest symptom of various neurodegenerative diseases, such as Alzheimer's disease (AD). However, accurate prediction and identification of the risk factors leading to memory decline remain limited.

Objective
The objective of this study was to develop and validate a machine learning model that can accurately predict risk factors for memory decline among US adults.

Methods
A total of 9971 individuals were enrolled from the National Health and Nutrition Examination Survey (NHANES) 2015–2016 database. The least absolute shrinkage and selection operator (LASSO) was used to screen for characteristic predictors. Five machine learning (ML) algorithms were employed: logistic regression, the ExtraTrees classifier, the bagging classifier, eXtreme Gradient Boosting (XGBoost), and random forest (RF). The performance of each model was evaluated by confusion matrix, area under the curve (AUC), accuracy, precision, specificity, recall, and F1 score.

Results
The final sample comprised 4525 subjects, of whom 7.7% (N = 347) exhibited memory decline. The ExtraTrees classifier and XGBoost models demonstrated superior predictive performance and clinical value compared with the other machine learning models, with AUC values of 0.915 and 0.911, respectively. They also predicted memory decline consistently and accurately in the external datasets, with AUCs of 0.851 and 0.843, respectively.

Conclusion
The ExtraTrees classifier and XGBoost were the two best-performing models for predicting memory decline. Nevertheless, future investigations are needed to confirm the accuracy of these findings.
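
The following is a minimal, illustrative sketch of the modeling workflow described in the Methods: LASSO-based predictor screening followed by training and evaluating the five classifiers with the stated metrics. It uses synthetic data in place of the NHANES 2015–2016 cohort (not bundled here), and all hyperparameters, the train/test split, and the class imbalance ratio are assumptions for demonstration, not the authors' exact configuration.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.ensemble import (ExtraTreesClassifier, BaggingClassifier,
                              RandomForestClassifier)
from sklearn.metrics import (roc_auc_score, accuracy_score, precision_score,
                             recall_score, f1_score, confusion_matrix)
from xgboost import XGBClassifier

# Synthetic stand-in for the NHANES sample: ~4500 subjects with roughly
# 8% positives, mirroring the reported 7.7% prevalence of memory decline.
X, y = make_classification(n_samples=4525, n_features=30, n_informative=10,
                           weights=[0.92, 0.08], random_state=0)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# LASSO screening: retain predictors with nonzero coefficients.
scaler = StandardScaler().fit(X_train)
lasso = LassoCV(cv=5, random_state=0).fit(scaler.transform(X_train), y_train)
selected = np.flatnonzero(lasso.coef_)
X_train_sel, X_test_sel = X_train[:, selected], X_test[:, selected]

# The five ML algorithms named in the Methods (hyperparameters assumed).
models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "ExtraTrees": ExtraTreesClassifier(n_estimators=300, random_state=0),
    "Bagging": BaggingClassifier(n_estimators=100, random_state=0),
    "XGBoost": XGBClassifier(n_estimators=300, eval_metric="logloss",
                             random_state=0),
    "Random Forest": RandomForestClassifier(n_estimators=300, random_state=0),
}

# Evaluate each model with the metrics listed in the Methods.
for name, model in models.items():
    model.fit(X_train_sel, y_train)
    proba = model.predict_proba(X_test_sel)[:, 1]
    pred = model.predict(X_test_sel)
    tn, fp, fn, tp = confusion_matrix(y_test, pred).ravel()
    specificity = tn / (tn + fp)
    print(f"{name}: AUC={roc_auc_score(y_test, proba):.3f} "
          f"accuracy={accuracy_score(y_test, pred):.3f} "
          f"precision={precision_score(y_test, pred, zero_division=0):.3f} "
          f"recall={recall_score(y_test, pred):.3f} "
          f"specificity={specificity:.3f} "
          f"F1={f1_score(y_test, pred, zero_division=0):.3f}")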