Abstract

In recent years, the use of deep learning in language models has gained much attention. Some research projects claim that these models can generate text that reads as if written by a human, enabling new possibilities in many application areas. Among the different areas related to language processing, one of the most notable applications of this type of modeling is programming languages. For years, the machine learning community has researched this software engineering area, pursuing goals such as auto-completing, generating, fixing, or evaluating code written by humans. Despite the increasing popularity of deep learning-enabled language models, we found a lack of empirical papers that compare different deep learning architectures for creating and using language models based on programming code. This paper compares different neural network architectures, namely Average Stochastic Gradient Descent (ASGD) Weight-Dropped LSTMs (AWD-LSTMs), AWD-Quasi-Recurrent Neural Networks (QRNNs), and Transformers, combined with transfer learning and different forms of tokenization, to see how they behave when building language models from a Python dataset for code generation and fill-mask tasks. Considering the results, we discuss each approach’s strengths and weaknesses and the gaps we found in evaluating the language models or applying them in a real programming context.
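
As an illustration of the setup described above, the following sketch fine-tunes a pretrained AWD-LSTM language model on a folder of Python source files and samples a code completion. It assumes a fastai-style workflow; the folder name, hyperparameters, and sampling settings are illustrative placeholders rather than the paper's exact configuration.

    from fastai.text.all import *

    # Assumed layout: a folder of plain-text files containing Python source.
    # "python_corpus", the split ratio, and all hyperparameters are illustrative.
    dls = TextDataLoaders.from_folder(Path("python_corpus"), is_lm=True, valid_pct=0.1)

    # AWD-LSTM language model initialized from fastai's pretrained English
    # weights (transfer learning); the paper also studies an AWD-QRNN variant
    # and different tokenization schemes (e.g., character-level).
    learn = language_model_learner(dls, AWD_LSTM, drop_mult=0.3,
                                   metrics=[accuracy, Perplexity()])
    learn.fine_tune(4, 2e-3)

    # Auto-complete a code prompt by sampling the next tokens.
    print(learn.predict("def fibonacci(n):", n_words=30, temperature=0.75))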

Highlights

  • To generate source code, we show output from the AWD-Long Short-Term Memory (LSTM) char, AWD-Quasi-Recurrent Neural Network (QRNN) char, and GPT-2 models (a generation sketch follows this list)

  • This paper compares how different tokenization approaches, deep neural network architectures, pre-trained models, and transfer learning affect the results of language models used to generate source code or auto-complete software pieces
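
As a concrete example of the generation setting mentioned in the highlights, the sketch below samples code completions from a causal Transformer using the Hugging Face pipeline API. The base "gpt2" checkpoint and the prompt are illustrative; the models actually compared in the paper are fine-tuned on a Python dataset, and their exact checkpoints are not listed in this excerpt.

    from transformers import pipeline

    # Causal text generation with GPT-2. The base "gpt2" checkpoint is used
    # purely for illustration; the paper compares models fine-tuned on Python
    # code, whose exact checkpoints are not listed in this excerpt.
    generator = pipeline("text-generation", model="gpt2")

    prompt = "def read_csv(path):"
    outputs = generator(prompt, max_length=60, do_sample=True,
                        temperature=0.8, num_return_sequences=2)
    for out in outputs:
        print(out["generated_text"])
        print("-" * 40)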

Introduction

We are digitally surrounded by computational Language Models (LMs) that guide us while writing to reduce user effort, suggest different options for words or sentences to enhance our style, or accurately fix our grammatical errors [1,2,3]. Many of the keys we press while typing on a keyboard become part of the inputs that compose new datasets for those models, which in turn shape how we communicate with others. Does the same happen when we write code? LMs have been used to generate source code automatically from sample code inputs or pseudo-code, and the performance of the generated code has been evaluated [9,10,11]. Another exciting application of NLP to source code languages is the automatic translation between different languages.
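
Alongside code generation, the abstract mentions fill-mask tasks, where the model suggests a missing token, analogous to the word suggestions described above. The sketch below shows this with a masked Transformer model; the "roberta-base" checkpoint is a generic English model used only for illustration, since this excerpt does not name the masked models or code-trained weights evaluated in the paper.

    from transformers import pipeline

    # Fill-mask: ask a masked Transformer model to suggest the hidden token.
    # "roberta-base" is a generic English checkpoint used only as an example.
    fill = pipeline("fill-mask", model="roberta-base")

    # Mask one token inside a small Python snippet and list the suggestions.
    snippet = "for i in <mask>(10):\n    print(i)"
    for candidate in fill(snippet, top_k=5):
        print(candidate["token_str"], round(candidate["score"], 3))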
