Abstract
Data-driven model modification plays an important role in statistical methodology for advancing the understanding of substantive matters. However, when the sample size is not sufficiently large, model modification using the Lagrange multiplier (LM) test has been found to perform poorly due to capitalization on chance. With the recent development of lasso regression in statistical learning, lasso regularization for structural equation modeling (SEM) may seem to be a method that could avoid capitalizing on chance when searching for an adequate model. There is little evidence, however, validating the performance of lasso SEM. The purpose of this article is to examine the performance of lasso SEM by comparing it against the LM test, aiming to answer the following five questions: (1) Can we trust the results of lasso SEM for model modification? (2) Does the performance of lasso SEM depend more on the effect size or on the absolute value of the parameter? (3) Does lasso SEM perform better than the widely used LM test for model modification? (4) Are lasso SEM and the LM test affected by nonnormally distributed data in practice? (5) Do lasso SEM and the LM test perform better with robustly transformed data? Addressing these questions with real data, the results indicate that lasso SEM fails to deliver on its expected promises and does not perform better than the LM test.
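The shrinkage behavior the abstract alludes to can be illustrated with ordinary lasso regression. The sketch below (using scikit-learn, with made-up synthetic data) is only an analogy: the article's lasso SEM penalizes SEM parameters such as cross-loadings or error covariances, not regression slopes, but the mechanism of the L1 penalty driving spurious parameters to exactly zero is the same idea.

```python
# Illustrative analogy only: lasso's L1 penalty sets spurious
# coefficients to exactly zero, which is the mechanism lasso SEM
# borrows to decide which fixed model parameters should be freed.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 5))

# Only the first two predictors truly matter; the other three are noise.
beta = np.array([2.0, 1.5, 0.0, 0.0, 0.0])
y = X @ beta + rng.normal(scale=0.5, size=n)

fit = Lasso(alpha=0.3).fit(X, y)
print(np.round(fit.coef_, 2))
# The nonzero estimates are shrunk toward zero, and the three
# spurious coefficients are (essentially) zeroed out entirely.
```

Note that the penalty weight (`alpha` here) governs the trade-off: too small and chance fluctuations survive, too large and genuine effects are suppressed. This trade-off is one reason the article questions whether lasso SEM actually avoids capitalization on chance in small samples.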
Source: Structural Equation Modeling: A Multidisciplinary Journal