Abstract

The introduction of Transformer models has significantly advanced natural language processing, and the development of Large Language Models (LLMs) such as GPT-4 and Gemini is transforming industries by automating programming tasks. However, challenges remain regarding their ability to fully replace human programmers, particularly in terms of efficiency and handling complex problems. This study evaluates the performance of GPT models in solving algorithmic problems across three programming languages (Python, Java, and C++), focusing on runtime and memory efficiency to provide insight into the capabilities and limitations of LLMs in practical programming tasks. We selected 15 LeetCode problems categorized by difficulty and instructed GPT-3.5, GPT-4, and GPT-4o to generate solutions in Python, Java, and C++. For each problem, code was generated and executed 10 times, and runtime and memory usage were measured. Statistical analyses, including two-way ANOVA and post hoc Tukey's HSD tests, were conducted to evaluate the results. The findings indicate that programming language has a significant effect on both memory and runtime efficiency, with C++ outperforming Python and Java. In contrast, there were no statistically significant differences in performance among GPT-3.5, GPT-4, and GPT-4o for most tasks. Python was found to be significantly slower and more memory-intensive than C++ and Java. While GPT models show promise in assisting with programming tasks, their practical utility remains limited, particularly for complex problems, and improvements in newer GPT models do not always translate into significant performance gains. The choice of programming language plays a crucial role in optimizing LLM-generated code, suggesting that LLMs are better suited to augmenting human programmers than to replacing them in critical tasks.
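As an illustration only (not the paper's actual analysis pipeline), the following Python sketch shows how per-run measurements could be analyzed with a two-way ANOVA and a post hoc Tukey's HSD test using statsmodels. The column names (model, language, runtime_ms), the baseline runtimes, and the simulated data are all hypothetical placeholders standing in for the study's real measurements.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)

# Hypothetical per-run measurements: 10 executions per (model, language) cell,
# with made-up baseline runtimes purely to make the example runnable.
baselines = {"Python": 50.0, "Java": 18.0, "C++": 8.0}
rows = []
for model in ["GPT-3.5", "GPT-4", "GPT-4o"]:
    for language, base in baselines.items():
        for _ in range(10):
            rows.append({
                "model": model,
                "language": language,
                "runtime_ms": base + rng.normal(0.0, 2.0),
            })
data = pd.DataFrame(rows)

# Two-way ANOVA: main effects of model and language plus their interaction.
fit = smf.ols("runtime_ms ~ C(model) * C(language)", data=data).fit()
print(sm.stats.anova_lm(fit, typ=2))

# Post hoc Tukey's HSD comparing the three languages on runtime.
print(pairwise_tukeyhsd(data["runtime_ms"], data["language"]))

With simulated data of this shape, the language factor dominates the variance, mirroring the kind of effect the study reports, while the model factor shows no significant differences.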