Abstract
With the rapid development of technology, automated assessment systems have become an essential tool for streamlining the grading of short answers. This research explores ways to improve the accuracy of automated assessment using natural language processing techniques, such as Latent Semantic Analysis (LSA) and the Longest Common Subsequence (LCS) algorithm, while highlighting the challenges associated with the scarcity of Arabic-language datasets. These methodologies facilitate the assessment of both lexical and semantic congruence between student submissions and benchmark answers. The overarching objective is to establish a scalable and precise grading mechanism that reduces the time cost and subjectivity of manual evaluation.
Notwithstanding significant advancements, obstacles such as the scarcity of Arabic datasets persist as a principal impediment to effective automated grading in languages other than English. This research scrutinizes contemporary strategies within the domain, highlighting the imperative for more sophisticated models and extensive datasets to bolster the precision and adaptability of automated grading frameworks, particularly concerning Arabic textual content.
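To make the lexical-overlap side of the approach concrete, the sketch below shows a generic token-level Longest Common Subsequence (LCS) similarity between a student answer and a reference answer. This is an illustrative, standard dynamic-programming implementation under assumed whitespace tokenization, not the exact pipeline evaluated in this research; the example sentences and the choice to normalize by reference length are likewise illustrative assumptions.

```python
# Sketch: lexical similarity via Longest Common Subsequence (LCS) of tokens.
# Generic textbook implementation -- assumptions: whitespace tokenization,
# score normalized by the reference answer's length.

def lcs_length(a: list[str], b: list[str]) -> int:
    """Length of the longest common subsequence of two token lists (O(len(a)*len(b)) DP)."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, tok_a in enumerate(a, 1):
        for j, tok_b in enumerate(b, 1):
            if tok_a == tok_b:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def lcs_similarity(student: str, reference: str) -> float:
    """Normalize LCS length by the reference length, yielding a score in [0, 1]."""
    s, r = student.split(), reference.split()
    if not r:
        return 0.0
    return lcs_length(s, r) / len(r)

# Hypothetical student/reference pair:
score = lcs_similarity(
    "the water cycle includes evaporation and condensation",
    "the water cycle involves evaporation condensation and precipitation",
)
print(round(score, 3))
```

In a full grading pipeline, a score like this would typically be combined with a semantic measure (e.g., cosine similarity in an LSA-reduced term space) so that paraphrased but correct answers are not penalized for low word overlap.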