Abstract

This paper addresses automatic test paper generation and scoring in university English examinations. First, an automatic test paper generation model was established. An improved genetic algorithm (IGA) was then designed for intelligent test paper generation and was also used to automatically score answers to Chinese-to-English translation questions in terms of syntax and semantics. Compared with the traditional genetic algorithm and the particle swarm optimization algorithm, the IGA generated test papers faster, with an average generation time of 25 s, and achieved a higher success rate (94%), indicating higher validity; the difficulty and differentiation degrees of the generated papers were also closer to the preset values. The automatic scoring results correlated with the manual scoring results at above 0.8. These findings demonstrate the effectiveness of the proposed automatic test paper generation and scoring method, which can be applied in practice to enhance the security and fairness of large-scale English exams and to achieve objectivity and consistency in scoring.
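As a rough illustration of the test paper generation objective summarized above (not the authors' actual IGA), the following Python sketch scores a candidate paper by its deviation from preset difficulty and differentiation targets, which is the kind of fitness a genetic algorithm would optimize; the item bank, weights, and function names here are all hypothetical.

```python
import random
from dataclasses import dataclass

@dataclass
class Item:
    """A hypothetical exam item with precalibrated statistics."""
    difficulty: float        # preset item difficulty in [0, 1]
    differentiation: float   # preset item discrimination in [0, 1]

def paper_fitness(paper, target_difficulty, target_differentiation,
                  w_diff=0.5, w_disc=0.5):
    """Fitness of a candidate paper: the smaller the deviation of its average
    difficulty and differentiation from the preset targets, the higher the fitness."""
    avg_diff = sum(item.difficulty for item in paper) / len(paper)
    avg_disc = sum(item.differentiation for item in paper) / len(paper)
    deviation = (w_diff * abs(avg_diff - target_difficulty)
                 + w_disc * abs(avg_disc - target_differentiation))
    return 1.0 / (1.0 + deviation)

# Hypothetical item bank and initial population of candidate papers.
bank = [Item(random.uniform(0.2, 0.9), random.uniform(0.2, 0.9)) for _ in range(200)]
population = [random.sample(bank, 40) for _ in range(30)]  # 30 papers of 40 items each

best = max(population, key=lambda p: paper_fitness(p, 0.6, 0.5))
print("best fitness:", round(paper_fitness(best, 0.6, 0.5), 4))
```

In a full genetic algorithm such as the IGA described in the paper, a fitness function of this kind would guide selection, crossover, and mutation over successive generations; the sketch above only evaluates a randomly generated initial population.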
