This study explores the capabilities of large language models (LLMs), particularly GPT-4, in understanding and solving geotechnical problems, a specialised area that has received little attention in previous research. Using a question bank drawn from a commonly used geotechnical engineering textbook, the research assesses GPT-4’s performance across various topics and levels of cognitive complexity under different prompting strategies: zero-shot learning, chain-of-thought (CoT) prompting, and custom instructional prompting. The results show that while GPT-4 demonstrates significant potential in addressing fundamental geotechnical concepts and problems, its effectiveness varies with the topic, the complexity of the task, and the prompting strategy employed. Errors encountered by GPT-4 are categorised as conceptual errors, grounding errors, calculation errors, or model-inherent deficiencies in interpreting visual information. Custom instructional prompts, tailored specifically to address GPT-4’s shortcomings, markedly enhance its performance: GPT-4 achieved an overall problem-solving accuracy of 67% with custom instructional prompting, substantially higher than the 28.9% with zero-shot learning and 34% with CoT prompting. However, the study underscores the importance of human oversight in interpreting and verifying GPT-4’s outputs, especially for complex, higher-order cognitive tasks. The findings contribute to understanding the potential and limitations of current LLMs in specialised educational fields, offering insights for educators and researchers integrating AI tools such as GPT-4 into their teaching and problem-solving approaches. The study advocates a balanced integration of AI in education that enriches educational delivery and experience while preserving the indispensable role of human expertise alongside technological advancements.