This study investigates the potential of ChatGPT, a large language model, to assess students' critical thinking in online peer feedback. Large language models such as ChatGPT have made significant progress in natural language processing in recent years and show promise for application in teaching evaluation and feedback. However, whether generative AI can genuinely support educational practitioners in teaching and learning, and how it can be used to accurately assess students' critical thinking, remain challenging questions. We examine whether ChatGPT can effectively evaluate critical thinking using established coding systems. By comparing the consistency and accuracy of ChatGPT's coding against manual coding of online peer feedback texts, we clarify how ChatGPT processes peer feedback data and conducts assessments. ChatGPT's performance is evaluated through a comprehensive analysis employing precision, recall, F1 score, and a confusion matrix. Additionally, we group students by their critical thinking levels and analyze how ChatGPT's assessments relate to these levels. Our findings suggest that ChatGPT demonstrated some ability to assess the higher-level dimensions of critical thinking but showed limitations in assessing the more granular secondary dimensions nested under them, even though such fine-grained assessment would more accurately capture learners' critical thinking levels. Surprisingly, ChatGPT's evaluations were not influenced by students' critical thinking levels. This study underscores ChatGPT's potential to automate critical thinking assessment at scale, alleviating the burden on educators and deepening understanding of critical thinking in peer feedback.