Abstract

Computer programming has emerged in research, industry, and everyday life as a general-purpose problem-solving tool. This expansion has driven a steady rise in demand for tools, such as auto-completion and code generation, that make programmers more productive or make programming easier. The development of Artificial Intelligence (AI) systems capable of understanding and generating code therefore has the potential to transform these tools and the way humans interact with them. Systems that produce code are not only practically useful but also serve as stepping stones toward a better understanding of AI and its relationship to programming. One benchmark central to this development is solving competitive programming (CP) problems, which demand mastery of algorithms and comprehension of complex natural language; this remains difficult for AI models because it requires human-like reasoning. In this study, the authors conducted a Systematic Literature Review (SLR) to trace the evolution of program synthesis and to examine how models such as AlphaCode and Codex perform when solving CP problems. The review finds that even models specifically trained for CP problems still fall short of human-level reasoning on these tasks, so AI has a long way to go before competing at the highest level of CP. From this literature review, it can be concluded that the code auto-completion and code-generation tools available today still do not meet the necessary benchmark of solving CP tasks.
