Use of Rubrics: Research Articles

Overview

325 articles, published in the last 50 years.

Related Topics

  • Summative Assessment
  • Peer Assessment
  • Portfolio Assessment
  • Learning Assessment
  • Assessment Rubrics
  • Teacher Assessment
  • Assessment Tasks

Articles published on Use of Rubrics

298 search results, sorted by recency.
The analytic use of rubrics in writing classes by language students in an EFL context: students’ writing model and benefits

Purpose: This study examines the process of using analytic rubrics in higher education, focusing on students in a writing course for basic users of English, the resources they employ as they use the rubric, and the benefits they perceive from it. Method: The study involved 13 students (N = 13) enrolled in a basic English writing course. After completing the course, semi-structured interviews and stimulated recall sessions were conducted with participants. The focus was on understanding their use of the analytic rubric in their writing process and the resources they utilized, such as smartphone apps, the Grammarly website, and both mobile and desktop versions of MS Word. Findings: The results indicated that the students exhibited a writing model comprising five recursive, intertwined steps. Long-term benefits included improved self-efficacy, writing skills, satisfaction, and task-management skills. In the short term, the rubric helped clarify the requirements, served as a benchmark for their writing, and highlighted their strengths and weaknesses in the writing process. Originality: This study contributes to the limited research on the use of analytic rubrics in EFL contexts in higher education, providing insights into the writing process of basic English learners and the benefits of rubric use in language learning.

  • Journal: Frontiers in Education
  • Publication Date: Jul 1, 2025
  • Author: Nada Fahad Bin Dahmash

Assessment for learning or assessment for scoring? Washback of analytic rubrics in Vietnamese EFL classrooms

This study investigates the washback effects of EFL lecturers’ use of analytic rubrics for assessing students’ writing performance in Vietnamese higher education. Although analytic rubrics are widely promoted for enhancing assessment transparency and supporting student learning, little is known about how lecturers implement them in practice and how their use influences teaching and learning behaviors. Drawing on Washback Theory and Assessment for Learning principles, this qualitative study involved semi-structured interviews with nine Vietnamese EFL lecturers across two institutions, representing novice, mid-career, and near-end-career stages. Thematic analysis revealed considerable variation in rubric implementation: novice lecturers tended to use rubrics rigidly for scoring, while more experienced lecturers integrated rubrics into instructional processes to support student learning. Lecturers’ perceptions of rubrics evolved over time, with greater teaching experience leading to a more formative and reflective use, although institutional constraints sometimes limited their flexibility. Rubric use often helped clarify writing expectations and supported student self-monitoring, but it sometimes led to formulaic writing when not well integrated into instruction. These findings highlight the dynamic, context-dependent nature of rubric washback and underscore the need for sustained professional development, culturally responsive rubric design, and institutional support to promote the effective use of assessment tools in EFL writing classrooms.

  • Journal: Language Testing in Asia
  • Publication Date: Jun 19, 2025
  • Authors: Hoang Yen Phuong + 2

Using specifications grading in a BSN honors program to promote engagement and competency.

  • Journal: Journal of Professional Nursing: Official Journal of the American Association of Colleges of Nursing
  • Publication Date: May 1, 2025
  • Authors: Allison R Jones + 2

Using rubrics for formative purposes: identifying factors that may affect the success of rubric implementations

The formative use of rubrics seems to have the potential to promote student learning, supposedly by making expectations and criteria explicit. However, effects on how well students perform on academic tasks when supported by rubrics vary. The aim of this study was to identify factors in rubric interventions that may potentially explain this variation in effects. This was investigated by analysing 15 'high-quality studies' reporting on rubric interventions. The 'success' of these studies was ranked based on the effect size on academic performance from rubric interventions. We performed a content analysis, searching for similarities and differences in procedures and instrumentation. Our analysis revealed two key characteristics common to successful interventions: first, teachers explain both the content of the rubric and its application for formative purposes; second, an effective sequence involves students writing or producing work, followed by feedback or self-assessment, and subsequent revision.

  • Journal: Assessment in Education: Principles, Policy & Practice
  • Publication Date: Mar 4, 2025
  • Authors: Anders Jonsson + 3
  • Open Access
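The ranking step described in the study above turns on a standardized effect size. The following is a minimal, illustrative Python sketch of how interventions could be ranked by Cohen's d computed from reported group statistics; the study names and numbers are made up and are not taken from the review.

```python
# Illustrative sketch (not the review's data): ranking rubric interventions
# by a standardized effect size (Cohen's d) computed from reported group
# means and standard deviations for academic performance.
from dataclasses import dataclass
from math import sqrt

@dataclass
class Study:
    name: str
    mean_rubric: float   # mean performance, rubric group
    mean_control: float  # mean performance, control group
    sd_rubric: float
    sd_control: float
    n_rubric: int
    n_control: int

    def cohens_d(self) -> float:
        # Pooled standard deviation, then standardized mean difference.
        pooled_sd = sqrt(
            ((self.n_rubric - 1) * self.sd_rubric ** 2
             + (self.n_control - 1) * self.sd_control ** 2)
            / (self.n_rubric + self.n_control - 2)
        )
        return (self.mean_rubric - self.mean_control) / pooled_sd

# Hypothetical summary statistics for three made-up interventions.
studies = [
    Study("Intervention A", 78.0, 71.0, 10.0, 11.0, 40, 38),
    Study("Intervention B", 82.0, 80.5, 9.0, 9.5, 55, 52),
    Study("Intervention C", 74.0, 66.0, 12.0, 12.5, 30, 31),
]

# "Success" here is simply the size of the standardized effect.
for s in sorted(studies, key=lambda s: s.cohens_d(), reverse=True):
    print(f"{s.name}: d = {s.cohens_d():.2f}")
```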

Comparing Analytic and Mixed-Approach Rubrics for Academic Poster Quality.

  • Journal: American Journal of Pharmaceutical Education
  • Publication Date: Mar 1, 2025
  • Authors: Michael J Peeters + 1
  • Open Access

Scoring Rubrics Method in Performance Assessment and its effect of Mathematical Achievement

This study aimed to investigate the impact of using scoring rubrics for performance assessment on students' achievement. The study followed an experimental approach, and the sample consisted of 187 male and female students enrolled in the Calculus course. They were divided into three groups: the first experimental group, whose performance was evaluated using analytical scoring rubrics; the second experimental group, whose performance was evaluated using holistic scoring rubrics; and the control group, whose performance was evaluated using the traditional method. Additionally, a mathematics achievement test was developed, and two scoring rubrics, one analytical and the other holistic, were prepared to evaluate students' performance. The results of the study favored the use of analytical scoring rubrics over holistic ones, as they considered all details, procedures, and levels of understanding and perception. The students expressed satisfaction with the use of analytical and holistic performance scoring rubrics in evaluating their performance. The study recommended that students pay attention to interpreting their procedures when performing mathematical tasks. It also encouraged teachers to use scoring rubrics to evaluate students' performance and called upon curriculum authors to make the necessary modifications and additions to increase students' opportunities for justifying their procedures. Moreover, conducting in-depth studies that allow students to justify their procedures was suggested. Keywords: performance-based assessment, scoring rubrics, achievement, composition and inverse functions, assessment strategies, educational psychology, pedagogical methods.

  • Journal: Athens Journal of Education
  • Publication Date: Jan 27, 2025
  • Authors: Mohammad A Tashtoush + 2
  • Open Access
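As a rough illustration of the three-group design described above (analytical rubric vs. holistic rubric vs. traditional marking), the sketch below runs a one-way ANOVA on hypothetical achievement scores with SciPy. It is not the study's data or its actual statistical procedure.

```python
# Illustrative three-group comparison on hypothetical achievement scores
# (analytic rubric vs. holistic rubric vs. traditional marking).
# This is not the study's data or its statistical procedure.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
analytic = rng.normal(loc=78, scale=8, size=62)      # hypothetical group scores
holistic = rng.normal(loc=74, scale=8, size=62)
traditional = rng.normal(loc=70, scale=8, size=63)

f_stat, p_value = stats.f_oneway(analytic, holistic, traditional)
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Group means as a quick descriptive comparison.
for name, scores in [("analytic", analytic), ("holistic", holistic),
                     ("traditional", traditional)]:
    print(f"{name}: mean = {scores.mean():.1f}, sd = {scores.std(ddof=1):.1f}")
```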

Rubric for peer evaluation of oral presentations: Use and perceptions among experienced and non-experienced students.

A properly designed rubric for oral presentations should be useful both to assess students' performance and to help them prepare for the task. However, its use, and perceptions of it, might be influenced by students' previous familiarization with rubrics during pre-university courses. The aim of this study was to evaluate how students' previous experience with rubrics can influence their assessment of oral presentations and to compare their ratings with those assigned by educators. Eighty-six first-year undergraduate dentistry students were randomly distributed into teams to prepare oral presentations. A newly designed assessment rubric was presented to the students prior to the assignment. Six weeks later, the students gave the presentations and were anonymously assessed with the rubric by their peers and seven educators (EDU). Students' perceptions of the rubric as a learning and assessment tool were registered with an anonymous survey, which also recorded whether they were familiar with the use of rubrics (experienced students, ES) or not (non-experienced students, NES). Scores assigned by NES, ES, and EDU were compared. Sixty-seven students completed the survey. No differences were found between the scores assigned by experienced (n = 41) and non-experienced (n = 26) students. Educators assigned significantly lower scores than students. ES and NES considered the rubric a complete, easy-to-use, and useful tool that helped them feel confident during assessment and performance. Previous experience does not influence students' use and perceptions of the newly developed rubric; however, ratings assigned by students are not comparable to those of educators.

  • Journal: Journal of Dental Education
  • Publication Date: Jan 17, 2025
  • Authors: Juan José Pérez-Higueras + 2

The Role of Formative Assessment in Developing English Language Curriculum and Learning

This research examines the role of formative evaluation in improving curriculum development and the English language learning process. Formative assessment (FA) is a comprehensive evaluation method that provides feedback to teachers and students to improve teaching effectiveness. It helps students understand their limitations and gives teachers the ability to tailor lessons to students' needs. This article uses a systematic literature review approach to collect data from various studies. The results show that formative approaches are effective in increasing students' motivation, engagement and response to learning, as well as facilitating curriculum fulfillment based on students' needs. Formative assessment can be conducted using many strategies, such as questioning, group discussions, presentations, and the use of rubrics, journals, and quizzes, supported by technology to improve efficiency and effectiveness.

  • Journal: Journal of Technology, Education & Teaching (J-TECH)
  • Publication Date: Jan 14, 2025
  • Authors: Nurul Rahayu Ningsih + 2

Development of a rubric for assessing Multiplicative Thinking in Primary Schools in Zambia

The study focuses on developing a rubric to assess Multiplicative Thinking (MT) in Zambia. Drawing from a quasi-experiment with 3rd to 5th grade students from two schools, the study addressed the question: "What rubric can effectively capture the diverse range of students' responses to multiplication and division problems in Zambia?" Inductive content analysis was applied to examine, code, and categorise students' responses to tasks involving reading and drawing patterns, the commutative property, daily contexts, and the inverse relation between multiplication and division. The analysis focused on the multiplicative structure of responses through the lens of mutual translation between representation modes. The following categories emerged: no answer, incorrect, partially correct, and correct. The rubric aims to bridge the gap between mathematics education research and practical implementation. Additionally, it aligns with Zambian assessment policy, which promotes rubric use for quality assessment.

  • Journal: International Journal on Emerging Mathematics Education
  • Publication Date: Jan 13, 2025
  • Author: Arthur Mungalu

From Argument to Algorithm: L2 Teachers’ Cognitive Bootstrapping in Validity Argument in Writing Assessment

Focusing efficiently on potential weaknesses in the validity argument of writing assessments, such as writing subjectivity, content coverage, criteria vagueness, and rater incompetence, has been shown to enhance teachers' overall writing assessment competence (AC). In this study, we propose a computational bootstrapping model of validity argument in L2 writing assessment and compare it to argument-based models such as Practical Reasoning (PR), Assessment Use Argument (AUA), and Rubric Use Argument (RUA). Specifically, this computational model gradually improves the validity argument by addressing subtle deficiencies in previous assessment competence and constructing a bootstrapping process for the validity argument. We collected data from the Chinese English Teachers' Writing Assessment Competence Corpus (CETWACC), which includes texts from Chinese L2 teachers in higher education. The corpus comprises six levels and 60 items detailing how these teachers perform in: (i) construction, (ii) reflection, (iii) externalization, (iv) internalization, (v) enhancement, and (vi) reconstruction. The findings suggest that the Cognitive Bootstrapping Model (CBM) significantly enhances teachers' assessment competence through reasoned arguments and more scientific measures of validity arguments using computational algorithms. Overall, this study emphasizes the computational evidence of validity arguments and explores the subtle process of micro-changes in L2 writing assessment, transitioning from argument-based approaches to algorithmic methods. The results have implications for discussions on the role of validity argument bootstrapping in current writing assessments, offering a universally applicable and operationally feasible model for validating writing assessments.

  • Journal: SAGE Open
  • Publication Date: Jan 1, 2025
  • Authors: Yuguo Ke + 1

Translation vs. Scaffolding for the Writing Practice

Writing is one of the most difficult skills to teach in the English as a Foreign Language (EFL) classroom, above all in Primary Education. The main purpose of this paper was to analyse the techniques employed by EFL teachers in the teaching of writing. The study addressed fundamental issues for the development of the writing skill, such as approaches to teaching writing, scaffolding and translation, the use of metacognitive strategies, and the type of activities and resources employed by in-service teachers. A quantitative survey was designed and administered online to 47 in-service EFL teachers in Primary Education (20 from bilingual schools and 27 from non-bilingual schools). The results showed that scaffolding was used more frequently by the EFL teachers than translation, even though using the mother tongue was very popular in the respondents' answers. Besides, EFL teachers from both cohorts pointed out that among the most frequent metacognitive strategies were suggestions for improvement and the use of checklists or rubrics, rather than the organisation of peer reviews in class. The activities that best suited the practice of writing were those in which teachers had more control (familiarisation and controlled writing). To conclude, the respondents were unfamiliar with many of the tools devoted to writing, with more general educational tools such as Canva, Wordwall, or Padlet being more popular. Regarding the outcomes, this study depicts the perceptions and the actual implementation of techniques by EFL teachers in Primary Education, leaving an open door to further analysis in other educational stages to determine whether these techniques are confirmed or refuted in other contexts and levels.

  • Journal: Hikma
  • Publication Date: Dec 20, 2024
  • Authors: Cristina Castillo + 2

Teaching and Assessing at scale: the use of objective rubrics and structured feedback

It is widely recognised that feedback is an important part of learning: effective feedback should result in a meaningful change in student behaviour (Morris et al, 2021). However, individual feedback takes time to produce, and for large cohorts, typified by the North of 300 challenge in computing (CPHC, 2019), it can be difficult to do so in a timely manner. On occasion it seems that many academics lose sight of the purpose of feedback and instead view it as a justification for a mark, rather than an opportunity to provide meaningful tuition. One strategy to provide feedback at scale is to share the workload across multiple staff, but this introduces an additional problem in ensuring that the feedback and marking are equitable and consistent. In this paper we present a case study from teaching programming that attempts to address two distinct, but related, issues.

The first issue is to make feedback more meaningful. We attempt to achieve this by providing detailed feedback on a draft submission of programming coursework, allowing students time to make changes to their work prior to the final submission date. We present an analysis of the data generated from this approach, and its potential impact on student behaviour.

The second issue is that of scalability. This feedforward approach creates significant pressure on marking and on the necessity to provide feedback on a draft submission to large numbers of students in good time so that students are able to act upon it. To achieve this we consider an approach based on creating an objective, reusable marking rubric so that the work can be reasonably spread across multiple members of staff. We present an analysis of the data generated from this approach to determine whether we consider the rubric to be objective enough to remove individual interpretations and biases and, where discrepancies exist, attempt to determine where those discrepancies arise.

This work was carried out through an analysis of the impact on student assessment, as well as of the experience of the academic staff involved in using the rubrics. Preliminary results from this work show that the more objective rubric, used by several markers, did enable a scalable solution for rapid feedback on submissions, and this did indicate some improvement in student outcomes. However, the work also illustrated the problems of subjective interpretations and some variation in outcomes by marker.

  • Journal: New Directions in the Teaching of Natural Sciences
  • Publication Date: Dec 18, 2024
  • Authors: Simon J Grey + 1
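A central question in the case study above is whether an objective rubric keeps marks consistent across multiple markers. The sketch below shows one common way to check that: pairwise Cohen's kappa plus exact agreement on the levels awarded for a single criterion. The marker decisions are hypothetical, and scikit-learn is assumed to be available; this is not the case study's rubric or data.

```python
# Illustrative check of marker consistency on an objective rubric:
# pairwise Cohen's kappa over the criterion levels awarded by two markers.
# Hypothetical decisions only; not the case study's rubric or data.
from sklearn.metrics import cohen_kappa_score

# Levels awarded per submission on one rubric criterion (0-3 scale).
marker_a = [3, 2, 2, 1, 0, 3, 2, 1, 3, 2, 1, 0]
marker_b = [3, 2, 1, 1, 0, 3, 2, 2, 3, 2, 1, 1]

kappa = cohen_kappa_score(marker_a, marker_b)
exact_agreement = sum(a == b for a, b in zip(marker_a, marker_b)) / len(marker_a)

print(f"Cohen's kappa: {kappa:.2f}")
print(f"Exact agreement: {exact_agreement:.0%}")
```

Repeating this for every marker pair and every criterion gives a simple picture of where subjective interpretation creeps back in despite the rubric.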

Enhancing Assessment and Feedback in Game Design Programs

The integration of generative AI tools in game design education offers promising ways to streamline the grading, assessment, and feedback processes that are typically labor-intensive. In game design programs, faculty often deal with varied file formats, including 3D models, executable prototypes, videos, and complex game design documents. Traditional methods of assessment and feedback, primarily text-based, struggle to provide timely and actionable insights for students. Furthermore, only a small percentage of top students consistently review and apply feedback, leading to inefficiencies. This article explores how generative AI tools can augment these processes by automating aspects of grading, generating more personalized and meaningful feedback, and addressing the time-intensive nature of reviewing diverse file formats. Key strategies are discussed, including the use of rubrics tailored for AI-based assessment, automated prompts for narrative-driven assignments, and the application of AI in reviewing complex project builds. The objective is to create more time for faculty to engage in live mentoring and hands-on learning activities, which research shows to be more effective. Practical examples of various game design assignments, including build reviews and document evaluations, are provided to illustrate these new approaches. This shift promises to enhance student engagement and improve learning outcomes.

  • Journal: IJERI: International Journal of Educational Research and Innovation
  • Publication Date: Dec 3, 2024
  • Authors: James Hutson + 2

A Case Study on the Use of Learner-Centered Rubrics in a University Report Writing Class for Intermediate-Level Foreign Undergraduate Students (중급 외국인 학부생 대상 대학 보고서 수업에서의 학습자 중심 루브릭 활용 사례 연구)

  • Journal: Teaching Korean as a Foreign Language
  • Publication Date: Nov 30, 2024
  • Authors: Mijung Jang + 2

Impact of rubrics on students’ self-assessment and overall performance in an EAP writing course

This study investigates the impact of rubric use on students' self-assessment and overall performance in an English for Academic Purposes (EAP) writing course. Recognizing the complexity of academic writing, which requires substantial guidance and effort, the research examines how rubrics can help align students' self-assessments with instructor and/or program expectations. The study involved six students from diverse cultural and academic backgrounds who participated in multiple rounds of rubric-related assessments and follow-up reflective activities. The findings suggest that consistent use of rubrics can significantly enhance students' understanding of writing criteria, improve the correlation between student and instructor evaluations, and consequently lead to better writing outcomes. However, certain challenges, such as lower proficiency levels, time constraints, or lack of interest or motivation, may limit students' ability to fully comprehend and utilize rubrics effectively while performing a writing task. The research highlights the need for clear and measurable rubric descriptors to support students' comprehension, in addition to providing multiple writing samples for reference and allowing adequate practice time. These insights contribute to the ongoing discussion on rubric effectiveness in General English Language Teaching (General ELT) and EAP settings, offering practical suggestions on how to improve academic writing instruction and bridge the gap between students' performance and instructor and/or program expectations.

  • Journal: Global Journal of English Language Teaching
  • Publication Date: Oct 30, 2024
  • Author: Tetyana Bidna
  • Open Access

Draw Yourself Doing Mathematics: Creating a Prompt and Rubric to Probe Student Mental Images of Mathematics

The Draw Yourself Doing Mathematics prompt was developed to study university students' mental model of their self-image of doing mathematics, derived from work connected to student attitudes toward mathematics. Undergraduate students are given 10 minutes to draw, on physical paper, a picture in response to the prompt. A scoring rubric has been developed to analyze the drawings; it comprises an overall impression scale (ranging from extremely negative to extremely positive) and three categorical scales (location, activity, and affect). While the prompt was developed with K-16 students in mind, the validity of scores produced by the prompt has only been studied with undergraduate students. Rubric use has proven reliable with faculty and undergraduate researchers; scoring the drawings produced by the prompt allows several hundred drawings to be scored in a few hours. Scores produced by the rubric help paint a picture of students' mental images of doing mathematics over time, after pedagogical treatments, at the beginning and/or end of a course, and as a snapshot in time during the course. The drawings themselves help open conversations between instructors and students about, especially, the emotional aspects of doing mathematics.

  • Journal: Investigations in Mathematics Learning
  • Publication Date: Oct 27, 2024
  • Authors: Rachel Bachman + 1
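To make the rubric's structure concrete, here is a minimal sketch of how a scoring record combining an overall impression scale with the three categorical scales (location, activity, affect) might be represented and summarized. The level labels, example categories, and data are assumptions for illustration, not the authors' published instrument.

```python
# Minimal sketch of a drawing-scoring record for a rubric with an overall
# impression scale plus categorical scales (location, activity, affect).
# Scale labels and example values are illustrative assumptions, not the
# authors' published instrument.
from dataclasses import dataclass
from statistics import mean

IMPRESSION = {"extremely negative": 1, "negative": 2, "neutral": 3,
              "positive": 4, "extremely positive": 5}

@dataclass
class DrawingScore:
    student_id: str
    impression: str   # key into IMPRESSION
    location: str     # e.g. "classroom", "home", "outdoors"
    activity: str     # e.g. "solving problems", "taking a test"
    affect: str       # e.g. "smiling", "frustrated", "neutral"

scores = [
    DrawingScore("s01", "positive", "classroom", "solving problems", "smiling"),
    DrawingScore("s02", "negative", "home", "taking a test", "frustrated"),
    DrawingScore("s03", "neutral", "classroom", "solving problems", "neutral"),
]

# Summarize the overall impression scale numerically for a cohort snapshot.
avg_impression = mean(IMPRESSION[s.impression] for s in scores)
print(f"Mean overall impression (1-5): {avg_impression:.1f}")
```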

Systematic integration of a rubric in first-year engineering calculus courses

For many first-year engineering students, their first calculus or pre-calculus courses can be a substantial barrier to initial success and persistence into subsequent engineering courses. One approach that may mitigate this challenge for some students is to support their development as stronger metacognitive learners, learning how to self-evaluate and improve based on informed feedback. A structured mathematical problem-solving rubric designed to offer feedback on how to develop and communicate mathematical solutions to engineering problems can be a helpful tool to support metacognitive development. After a semester of careful and systematic instruction, practice, and use of such a rubric in engineering calculus courses, we found significant improvement in pre-calculus students' metacognitive awareness. A median split analysis showed that for both calculus and pre-calculus courses, the lower metacognitive half of each class improved significantly in their metacognitive abilities, as was the intent for the rubric. Students reported an overwhelmingly positive perception of the value of the rubric for their own learning, and the subset of calculus I students who improved in their sophistication of rubric use had significantly higher (12.5%) final exam scores compared to those who did not improve.

  • Journal: International Journal of Mathematical Education in Science and Technology
  • Publication Date: Oct 2, 2024
  • Authors: Patricia A Ralston + 6
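The median split analysis mentioned above can be illustrated with a short sketch: split students on pre-course metacognitive awareness scores, then test the pre/post change within the lower half with a paired t-test. The data below are simulated and are not the study's dataset or exact procedure.

```python
# Illustrative median-split analysis: split students on pre-course
# metacognitive awareness, then test pre/post change within the lower half.
# Simulated data; not the study's dataset or exact procedure.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pre = rng.normal(loc=3.4, scale=0.5, size=120)            # pre-course scores
post = pre + rng.normal(loc=0.15, scale=0.3, size=120)    # post-course scores

lower_half = pre <= np.median(pre)                         # median split mask
t_stat, p_value = stats.ttest_rel(post[lower_half], pre[lower_half])

print(f"Lower half: n = {lower_half.sum()}, "
      f"mean change = {(post[lower_half] - pre[lower_half]).mean():.2f}")
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
```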

Analytic Rubrics for the Practical Assessment of Students’ Performances in Engineering Survey Course

The use of analytic rubrics as an assessment tool has been an integral component of education. In Surveying courses, it is essential to evaluate psychomotor skills under simulated conditions of actual practice. The main objective of this study is to determine the comparative efficacy of practical assessment conducted at the diploma and degree levels in the Civil Engineering programme. To achieve this objective, rubrics were implemented in the continuous assessment of the Surveying courses to facilitate a profound comprehension of physics, mathematics, and engineering principles. Previously, practical assessment had been conducted using a traditional method that gauged student performance according to the time recorded during certain practical tasks, resulting in inadequate assessment tools that failed to capture the psychomotor component of practical assessments effectively. As for the methodology, a comparative analysis was conducted to examine the consistency and usefulness of analytic rubrics in assessing the course and programme outcomes of Surveying courses at both diploma and degree levels. The analysis aimed to determine the impact of these rubrics on student performance. This comparison should be conducted in order to prevent redundant actions, as it pertains to diploma-level students who will be integrated into the degree programme. The assessment reveals that most students met all performance criteria at an acceptable level, indicating that the analytic rubric used for the Practical Test in Surveying courses is adequate and effective as a grading instrument. As for the results, the comprehensive analysis indicates that a significant majority of students, ranging from 75% to 81%, exhibited commendable performance in achieving the criteria for various Programme Outcomes (PO), particularly PO4, PO5, PO9, and PO10.

  • Journal: Journal of Sustainable Civil Engineering and Technology
  • Publication Date: Sep 30, 2024
  • Authors: Adnan Derahman + 2

Performance of a Large‐Language Model in scoring construction management capstone design projects

Grading is one of the most significant hurdles for instructors, diverting their focus from the development of engaging learning activities, class preparation, and attending to students' questions. Institutions and instructors are continuously looking for alternatives to reduce the time educators spend on grading, frequently resulting in the hiring of teaching assistants whose inexperience and frequent rotation can lead to inconsistent and subjective evaluations. Large Language Models (LLMs) like GPT-4 may alleviate grading challenges; however, research in this field is limited when dealing with assignments that require specialized knowledge and complex critical thinking and that are subjective and creative. This research investigates whether GPT-4's scores correlate with human grading in a construction capstone project and how the use of criteria and rubrics in GPT influences this correlation. Projects were graded by two human graders and under three training configurations in GPT-4: no detailed criteria, paraphrased criteria, and explicit rubrics. Each configuration was tested through 10 iterations to evaluate GPT's consistency. Results challenge GPT-4's potential to grade argumentative assignments. GPT-4's scores correlate slightly better (although poorly overall) with human evaluations when no additional information is provided, underscoring the limited impact of the specificity of training materials on GPT scoring in this type of assignment. Despite the LLMs' promise, their limitations include variability in consistency and reliance on statistical pattern analysis, which can lead to misleading evaluations, along with privacy concerns when handling sensitive student data. Educators must carefully design grading guidelines to harness the full potential of LLMs in academic assessments, balancing AI's efficiency with the need for nuanced human judgment.

  • Journal: Computer Applications in Engineering Education
  • Publication Date: Sep 14, 2024
  • Authors: Gabriel Castelblanco + 2
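The kind of analysis described above, correlating model-assigned scores with human grades and checking consistency across repeated iterations, can be sketched as follows. The score arrays are simulated; nothing here reproduces the paper's data, prompts, or GPT-4 configurations.

```python
# Illustrative analysis of LLM-assigned vs. human-assigned project scores:
# rank correlation with the human average, plus per-project variability
# across repeated scoring iterations. Simulated numbers only; not the
# paper's data, prompts, or grading configurations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_projects, n_iterations = 15, 10

human_avg = rng.uniform(60, 95, size=n_projects)           # mean of two graders
# Simulated LLM scores: weak relation to human scores plus iteration noise.
llm_scores = (0.3 * human_avg + 55
              + rng.normal(0, 6, size=(n_iterations, n_projects)))

rho, p_value = stats.spearmanr(human_avg, llm_scores.mean(axis=0))
per_project_sd = llm_scores.std(axis=0, ddof=1)

print(f"Spearman rho (LLM mean vs. human): {rho:.2f} (p = {p_value:.3f})")
print(f"Median per-project SD across iterations: {np.median(per_project_sd):.1f}")
```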

Interviewing the Interviewers: How Qualitative Professors Assess Interviews in Their Classes

When reviewing the literature relating to what instructors of qualitative research courses are teaching, we found a gap regarding qualitative faculty's motivations behind what they teach and whether they use teaching tools such as rubrics in their assessment practices. We utilized general qualitative methods to explore and describe how instructors of qualitative methodology courses conceptualize and design interview-based assessments. By conducting structured interviews with five faculty members, we embraced interpretivist descriptive traditions and sought to obtain thick, rich descriptions of each participant's perspectives and experiences. After coding interview transcripts, we created themes to represent participant beliefs that (a) interview assignments are the most appropriate and comprehensive way to assess knowledge of qualitative research, (b) the design and/or use of rubrics for these assignments is complicated within qualitative coursework, and (c) thorough feedback is the most important aspect of providing high-quality instruction in qualitative research methods. We then summarize our findings and argue that, with creative application and further research, qualitative faculty can benefit from the use of rubrics to assess interview assignments in qualitative methods courses.

  • Journal: The Qualitative Report
  • Publication Date: Sep 9, 2024
  • Authors: Meredith Massey + 1
  • Open Access

