Abstract
The use of Monte Carlo methods to generate exam datasets is nowadays a well-established practice among econometrics and statistics examiners all over the world. Its advantages are well known: providing each student with a different dataset ensures that estimates are actually computed individually, rather than copied from someone sitting nearby. The method, however, has a major fault: initial “random errors,” such as mistakes in downloading the assigned dataset, may generate downward bias in student evaluation. We propose a set of calibration algorithms, typical of indirect estimation methods, that solve the issue of initial “random errors” and reduce evaluation bias. By ensuring round initial estimates of the parameters for each individual dataset, our calibration procedures allow students to determine whether they have started the exam correctly. When the initial estimates are not round numbers, this random error in the initial stage of the exam can be corrected immediately, thus reducing evaluation bias. The procedure offers the further advantage of easing markers’ lives by allowing them to check round-number answers only, rather than lists of numbers with many decimal digits.
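To illustrate the idea, here is a minimal sketch of one way such a calibration could work in an OLS setting; the paper’s actual algorithms may differ, and all names and parameter values below are illustrative assumptions. Given a simulated regression dataset, the dependent variable is shifted so that the OLS estimates land exactly on round target values, leaving the residuals unchanged.

```python
import numpy as np

rng = np.random.default_rng(seed=42)  # e.g., a per-student seed

# Hypothetical calibration: force the OLS estimates of a simulated
# dataset to equal the round "checkpoint" values beta_target.
n = 100
beta_target = np.array([2.0, 5.0])  # round values students can verify

X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ beta_target + rng.normal(scale=1.5, size=n)

# Raw OLS estimate on the simulated data (not round, in general).
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Shift y so the OLS estimate hits the target exactly: the new
# estimate is beta_hat - (beta_hat - beta_target) = beta_target,
# while the residual vector y_cal - X @ beta_target equals the
# original residuals y - X @ beta_hat.
y_cal = y - X @ (beta_hat - beta_target)

beta_check, *_ = np.linalg.lstsq(X, y_cal, rcond=None)
print(np.round(beta_check, 10))  # -> [2. 5.]
```

A student who downloads the correct dataset and runs the initial regression obtains exactly [2, 5]; any other result flags a download or setup error at the start of the exam rather than at grading time.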