Researchers have sought for decades to automate holistic essay scoring, and these programs have improved significantly over the years. However, achieving accuracy requires large amounts of training on human-scored texts, which reduces the expediency and usefulness of such programs for routine use by teachers on non-standardized prompts. This study analyzes the scores that multiple versions of ChatGPT assign to secondary student essays from three extant corpora and compares them to high-quality human ratings. We find that scoring by the current iteration of ChatGPT is not statistically significantly different from human scoring; it achieves substantial agreement with human raters, which may be sufficient for low-stakes, formative assessment purposes. However, as large language models evolve, additional research will be needed to reassess their aptitude for this task and to determine whether their proximity to human scoring can be improved through prompting or training.
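
The abstract refers to agreement and significance testing between model and human scores. The sketch below illustrates one conventional way such comparisons are computed for ordinal essay scores (quadratically weighted kappa plus a paired t-test); the score lists are hypothetical placeholders, not data or metrics from the study itself.

```python
# Hedged sketch: comparing LLM-assigned essay scores to human ratings.
# The score values below are illustrative placeholders, not data from the study.
from scipy.stats import ttest_rel
from sklearn.metrics import cohen_kappa_score

# Hypothetical holistic scores (e.g., on a 1-6 rubric) for the same ten essays.
human_scores   = [4, 3, 5, 2, 4, 6, 3, 5, 4, 2]
chatgpt_scores = [4, 3, 4, 2, 5, 6, 3, 5, 4, 3]

# Quadratically weighted kappa is a common chance-corrected agreement metric
# for ordinal scores; "substantial agreement" conventionally falls around 0.61-0.80.
qwk = cohen_kappa_score(human_scores, chatgpt_scores, weights="quadratic")

# A paired test on the same essays checks whether mean scores differ significantly.
t_stat, p_value = ttest_rel(human_scores, chatgpt_scores)

print(f"Quadratic weighted kappa: {qwk:.3f}")
print(f"Paired t-test: t = {t_stat:.3f}, p = {p_value:.3f}")
```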