Abstract

The present study aims to understand how syntactic knowledge is assessed in exams and to compare the level of difficulty of comparable items with respect to the object of assessment. The corpus comprises 66 grammar assessment items taken from the national final exams of secondary education administered between 2010 and 2017. The analysis focused on two indicators: the structural/syntactic pattern of the instruction that sets out the task students must perform, and the metalinguistic operations required to respond to it. The results show that assessment items targeting the same learning object can present different levels of difficulty. The study demonstrates that similarly formulated instructions do not always correspond to the same level of difficulty, nor do they assess the same learning outcomes. It also demonstrates that the tasks required of students involve metalinguistic operations more complex than comprehension.
