While complexity, accuracy, and fluency (CAF) measures are known to correlate with L2 writers’ scores, less is known about the effectiveness of automated feedback based on these measures for improving writing performance. Furthermore, it remains unclear whether improvements in specific CAF measures correspond to improvements in human-rater scores. Finally, the trade-off hypothesis predicts that the three CAF components cannot all be improved at once, so presenting multiple CAF measures to students simultaneously might cause cognitive overload, reducing their ability to take up the feedback. To examine these issues, a simple paragraph feedback tool was developed to provide feedback on number of words, vocabulary variety, supporting detail markers, and sentence length. The tool was implemented repeatedly in L2 academic paragraph writing lessons with 124 students. Improvement was assessed through CAF measures and human-rater scores, and student opinions were also gathered. Results showed significant improvements in number of words, sentence length, and supporting detail marker usage from pre- to post-treatment, with improvements in number of words and vocabulary variety found to be the strongest predictors of improvements in human-rater scores. Students reported finding vocabulary variety feedback most helpful, while supporting detail marker feedback was perceived as most confusing. Notably, students reported very low cognitive load overall. The article concludes by discussing pedagogical implications of these findings for L2 writing instruction and the implementation of automated feedback.
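The four measures the tool reports could be computed in many ways; the abstract does not specify the tool's implementation. As a minimal sketch, assuming vocabulary variety is operationalized as a type-token ratio and supporting detail markers are matched against a fixed (here hypothetical) phrase list, the measures might look like:

```python
import re

# Hypothetical marker inventory; the actual tool's list is not given in the abstract.
DETAIL_MARKERS = ["for example", "for instance", "such as", "because"]

def paragraph_feedback(text):
    """Sketch of four surface measures a paragraph feedback tool might report."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n_words = len(words)
    # Vocabulary variety as a type-token ratio (unique words / total words).
    variety = len(set(words)) / n_words if n_words else 0.0
    mean_sentence_length = n_words / len(sentences) if sentences else 0.0
    marker_count = sum(text.lower().count(m) for m in DETAIL_MARKERS)
    return {
        "word_count": n_words,
        "vocabulary_variety": round(variety, 2),
        "mean_sentence_length": round(mean_sentence_length, 1),
        "detail_markers": marker_count,
    }
```

For instance, `paragraph_feedback("Cats are great. For example, cats purr.")` reports 7 words, a mean sentence length of 3.5, and one detail marker. Note that a simple type-token ratio is length-sensitive, which is one reason word count and variety feedback may interact in practice.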