Abstract

One reason students go to counseling is that they are called in on the basis of self-reported health survey results; however, there is no agreed-upon standard for such calls. This study aimed to develop a machine learning (ML) model that predicts students' mental health problems in the same year and in the following year from the content of a health survey and its answering time (response time, response time stamp, and answer date). Data were obtained from the responses of 3561 (62.58%) of 5690 undergraduate students at University A in Japan (a national university) who completed the health survey in 2020 and 2021. We performed 2 analyses: in analysis 1, a mental health problem in 2020 was predicted from demographics, health survey answers, and answering time in the same year; in analysis 2, a mental health problem in 2021 was predicted from the same input variables as in analysis 1. We compared several ML models, including logistic regression, elastic net, random forest, XGBoost, and LightGBM, and adopted the LightGBM model on the basis of this comparison. Using the adopted model, we then compared the results with and without the answering time variables. Both analyses and both conditions achieved adequate performance (eg, in analysis 1, the Matthews correlation coefficient [MCC] was 0.970 with answering time and 0.976 without it; in analysis 2, the MCC was 0.986 with answering time and 0.971 without it). In both analyses and both conditions, responses to the questions about campus life (eg, anxiety and the future) had the highest impact (gain 0.131-0.216; Shapley additive explanations [SHAP] values 0.018-0.028), and 5 to 6 input variables from the campus life questions ranked among the top 10 by SHAP value. Contrary to our expectation, including the answering time-related variables did not substantially improve the prediction of students' mental health problems; however, certain variables derived from the answering time appear helpful in improving the prediction and affect the predicted probability. These results demonstrate the possibility of predicting mental health across years from health survey data: demographic and behavioral data, including answering time, were effective predictors alongside the self-rating items. The model shows how the characteristics of health surveys and the advantages of ML can be used synergistically, and the findings can inform improvements to health survey items and the criteria for calling students in for counseling.
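
As a rough illustration of the workflow described above (not the authors' actual code), the following Python sketch trains a LightGBM classifier on survey answers plus answering-time features, evaluates it with the Matthews correlation coefficient, and inspects feature importance via LightGBM gain and SHAP values. The file name, column names, and label are hypothetical placeholders.

# Minimal sketch, assuming a tabular file with hypothetical survey-answer and
# answering-time columns and a binary mental-health label.
import lightgbm as lgb
import pandas as pd
import shap
from sklearn.metrics import matthews_corrcoef
from sklearn.model_selection import train_test_split

df = pd.read_csv("health_survey_2020.csv")  # hypothetical data file

survey_items = ["campus_anxiety", "campus_future", "sleep_quality"]       # hypothetical self-rating items
timing_items = ["response_time_sec", "response_hour", "answer_date_ord"]  # hypothetical answering-time features
features = survey_items + timing_items   # "with answering time" condition; drop timing_items for the other condition
target = "mental_health_problem"         # hypothetical binary label

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df[target], test_size=0.2, stratify=df[target], random_state=0
)

model = lgb.LGBMClassifier(n_estimators=500, learning_rate=0.05)
model.fit(X_train, y_train)

# Evaluate with the MCC, the metric reported in the abstract.
mcc = matthews_corrcoef(y_test, model.predict(X_test))
print(f"MCC: {mcc:.3f}")

# Gain-based importance from the fitted LightGBM booster.
gain = pd.Series(
    model.booster_.feature_importance(importance_type="gain"), index=features
).sort_values(ascending=False)
print(gain.head(10))

# SHAP values for per-feature contributions to the predicted probability.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
if isinstance(shap_values, list):  # older SHAP versions return one array per class
    shap_values = shap_values[1]
shap.summary_plot(shap_values, X_test)

Running the same pipeline with and without timing_items would correspond to the two answering-time conditions compared in the study.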

