Abstract

This paper evaluates the performance differences between Latent Semantic Analysis (LSA) and Probabilistic Latent Semantic Analysis (PLSA) as automated essay scoring (AES) tools for judging essay text quality. A correlational research design was used to examine the relationship between LSA performance and PLSA performance. We introduced three weighting methods and performed six experiments to obtain LSA and PLSA scores for a total of 2,444 Chinese essays. The results show strong correlations between the LSA scores and the PLSA scores. While the overall performance of PLSA is better than that of LSA, the findings from the current study do not corroborate previous findings that claim a significant improvement for PLSA methods. The implications of our research for AES are that both LSA and PLSA have limited capability at this point, and that more reliable measures for automated essay analysis and scoring, such as text formats and forms, still need to be a component of text quality analysis.
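
To make the LSA-style scoring pipeline concrete, the following is a minimal illustrative sketch, not the paper's actual method: it assumes a TF-IDF term-weighting scheme (the paper compares several weighting methods, not specified here), a truncated-SVD latent space, and a similarity-weighted average of the scores of pre-graded reference essays. The names reference_essays, reference_scores, and target_essays are hypothetical; Chinese essays would additionally require word segmentation before vectorization.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

def lsa_scores(reference_essays, reference_scores, target_essays, k=100):
    """Score target essays by their similarity to graded reference essays
    in a k-dimensional latent semantic space (illustrative sketch only)."""
    vectorizer = TfidfVectorizer()                     # one possible term-weighting scheme
    X = vectorizer.fit_transform(list(reference_essays) + list(target_essays))
    svd = TruncatedSVD(n_components=min(k, X.shape[1] - 1))
    Z = svd.fit_transform(X)                           # latent semantic representations
    ref, tgt = Z[:len(reference_essays)], Z[len(reference_essays):]
    sims = np.clip(cosine_similarity(tgt, ref), 0, None)   # similarity to each reference essay
    weights = sims / sims.sum(axis=1, keepdims=True)
    return weights @ np.asarray(reference_scores)      # similarity-weighted average of reference scores

Agreement between two scoring methods (for example, LSA-derived versus PLSA-derived scores) can then be summarized with a Pearson correlation, e.g. scipy.stats.pearsonr(lsa_predicted, plsa_predicted), which is the kind of correlation analysis the study reports.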
