Abstract
Purpose
Program evaluation is an evidence-based process that allows institutions to document and improve the quality of graduate programs and to determine how to respond to growing calls for aligning training models with economic realities. This paper aims to present the current state of evaluation in research-based doctoral programs in STEM fields.

Design/methodology/approach
To highlight recent evaluative processes, the authors restricted the initial literature search to papers published in English between 2008 and 2019. As the authors were motivated by the shift at NIH, this review focuses on STEM programs, though papers on broader evaluation efforts were included as long as STEM-specific results could be identified. In total, 137 papers were included in the final review.

Findings
Only nine papers presented an evaluation of a full program. Instead, papers focused on evaluating individual components of a graduate program, testing small interventions or examining existing national data sets. The review did not find any documents that focused on the continual monitoring of training quality.

Originality/value
This review can serve as a resource, encourage transparency and provide motivation for faculty and administrators to gather and use assessment data to improve training models. By understanding how existing evaluations are conducted and implemented, administrators can apply evidence-based methodologies to ensure the highest quality training to best prepare students.