Abstract

Introduction

Course evaluations are an important tool for students to provide feedback on the structure of a course, the effectiveness of the instructor, and the success of their learning. However, since most course evaluations use the Likert scale, it is often difficult to accurately capture the diversity of student experiences and specific areas for course improvement. An alternative approach to course evaluations is Q-methodology, which asks students to rank salient opinion statements relative to each other into a forced normal distribution rather than independently on a Likert scale. The clustering of opinions among students performed by Q-methodology analysis highlights groups of shared opinions, values, and preferences, which are useful in understanding prevalent student perspectives and driving course reform. Previous work in our lab has employed Q-methodology to assess interprofessional education and pathoanatomy courses.

Aims

In this study, we will use Q-methodology to assess a large-scale (class size = 850) anatomy and physiology course across five disciplines in order to 1) validate the Q-method assessment across different populations experiencing the same course, 2) determine which laboratory experience is most appropriate for students, and 3) evaluate the equivalency of the course experience across disciplines.

Methods

Students across five disciplines (midwifery, nursing, engineering, iBioMed, and health sciences) enrolled in 1st, 2nd, and 3rd year Anatomy and Physiology will be recruited into this study. Critically, while all students experience the same lecture, the tutorial portion of the course is varied to best suit disciplinary needs. For example, the midwifery tutorial assignment is focused on integrating anatomy and physiology knowledge with the presentation of midwifery-related information to the general public, while engineering tutorials focus on applying biomedical engineering to the systems of the body. A Q sample consisting of approximately 40 statements will be generated from past course feedback, previous Q-method studies, and relevant literature. Participants will be asked to rank Q sample statements relative to each other in the second term of the course using an online "Q-sort" platform. After data collection, a by-person factor analysis will be completed using the qfactor program in Stata to uncover prevalent opinions within the cohort. Data will be considered across disciplines and with reference to participant demographics. This protocol has been approved by the McMaster Undergraduate Research Ethics Board and is in accordance with the Declaration of Helsinki.

Anticipated Significance

The results of this study are expected to extend previous Q-methodology work by validating this Q-method practice across populations. For this course, discrepancies between tutorial experiences and best practices to support course reform will be explored.

Support or Funding Information

This study was supported by the Education Program in Anatomy at McMaster University.

This abstract is from the Experimental Biology 2019 Meeting. There is no full-text article associated with this abstract published in The FASEB Journal.
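To make the by-person factor analysis described in the Methods more concrete, the sketch below is a minimal, hypothetical illustration in Python: it correlates participants (rather than statements) across their Q-sorts and extracts factors from the person-by-person correlation matrix. It is not the qfactor implementation used in the study, and the participant count, forced-distribution grid, and data are invented placeholders.

```python
# Minimal, hypothetical sketch of a by-person factor analysis on Q-sort data.
# The study itself uses the community-contributed qfactor program in Stata;
# this example only illustrates the core idea: participants, not statements,
# are correlated and factored. All counts and data below are invented.
import numpy as np

rng = np.random.default_rng(0)

n_statements = 40    # approximate Q-sample size described in the Methods
n_participants = 60  # hypothetical number of completed Q-sorts

# Forced quasi-normal grid for 40 statements: ranks from -4 to +4 with fixed
# column heights (2, 3, 5, 6, 8, 6, 5, 3, 2 statements per rank).
grid = np.repeat(np.arange(-4, 5), [2, 3, 5, 6, 8, 6, 5, 3, 2])
assert grid.size == n_statements

# Each column is one participant's Q-sort; random permutations of the grid
# stand in for real sorts collected on the online Q-sort platform.
sorts = np.column_stack([rng.permutation(grid) for _ in range(n_participants)])

# By-person correlation matrix: how similarly each pair of participants
# ranked the statements.
person_corr = np.corrcoef(sorts.T)  # shape (n_participants, n_participants)

# Simple principal-component extraction as a stand-in for the factor
# extraction and rotation that qfactor would perform.
eigvals, eigvecs = np.linalg.eigh(person_corr)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

n_factors = 3  # hypothetical retention choice (e.g. eigenvalue > 1 in practice)
loadings = eigvecs[:, :n_factors] * np.sqrt(eigvals[:n_factors])

# Participants loading strongly on the same factor share a prevalent opinion;
# their sorts can be combined into an idealised sort describing that viewpoint.
print(loadings.shape)  # (60, 3)
```

In the actual analysis, factor extraction, rotation, and the decision on how many factors to retain would follow standard Q-methodology practice in qfactor rather than the simplified principal-component step shown here.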

