Abstract

An assessment consists of questions addressing the required learning outcomes of a course. If a pool of questions of various types is made available, assessment design reduces to selecting questions, one by one, from the pool. Since the number of possible questions for a course may be quite large, and several preferences have to be matched, manually selecting a suitable question is impractical. This paper presents an enhanced implementation of a previously presented methodology for assessment design, applied to a Hydraulics course with an initial pool of 1,000 questions. Each question is tagged with a set of attributes, and the rules are generated by the expert system itself. The notion of a relevance score has been introduced: rather than returning a single question, the enhanced implementation displays a set of questions with their relevance scores so the instructor can choose among them. An MS SQL Server instance hosted on Azure is used for the web-based cloud implementation.
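The selection mechanism described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the attribute names, the preference format, and the scoring formula (a simple count of matched preferences) are all assumptions introduced here for clarity.

```python
from dataclasses import dataclass

@dataclass
class Question:
    text: str
    attributes: dict  # hypothetical tags, e.g. {"topic": "...", "difficulty": 3}

def relevance_score(question, preferences):
    """Assumed scoring rule: count how many instructor preferences
    the question's attribute tags satisfy exactly."""
    return sum(1 for key, value in preferences.items()
               if question.attributes.get(key) == value)

def top_questions(pool, preferences, n=5):
    """Return the n highest-scoring (score, question) pairs, so the
    instructor can choose among candidates rather than receive one."""
    scored = [(relevance_score(q, preferences), q) for q in pool]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:n]

# Example pool with illustrative Hydraulics questions.
pool = [
    Question("Define hydraulic radius.",
             {"topic": "open-channel flow", "difficulty": 1}),
    Question("Derive Manning's equation.",
             {"topic": "open-channel flow", "difficulty": 3}),
    Question("State Bernoulli's principle.",
             {"topic": "pipe flow", "difficulty": 1}),
]
prefs = {"topic": "open-channel flow", "difficulty": 3}
for score, q in top_questions(pool, prefs, n=2):
    print(score, q.text)
```

With these preferences, the Manning's-equation question matches both tags (score 2) and ranks first, while the hydraulic-radius question matches only the topic (score 1).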
