Pseudocode can efficiently represent algorithm logic, but converting it to executable code by hand is time-consuming. Recent work has applied autoregressive (AR) models to automate pseudocode-to-code conversion, achieving good accuracy but slow generation. Non-autoregressive (NAR) models offer the advantage of parallel generation; however, they struggle to capture contextual information effectively, which can degrade the quality of the generated output. This paper presents an improved NAR model that balances quality and efficiency in pseudocode conversion. First, two strategies are proposed to address the out-of-vocabulary and repetition problems. Second, an improved NAR model is built using linear smoothing and adaptive techniques in the transition matrix, which mitigate the "winner takes all" effect. Finally, a new synthesis-potential metric is proposed for evaluating pseudocode conversion. Experimental results show that the proposed method matches AR model performance while accelerating generation more than 10-fold. Furthermore, the proposed NAR model narrows the BLEU-score gap with the AR model on the EN-DE and DE-EN tasks of the WMT14 machine translation benchmark.
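The abstract does not give the exact formulation of the linear smoothing and adaptive techniques, but the general idea of smoothing a row-stochastic transition matrix to counter "winner takes all" can be sketched as follows. This is a minimal illustration, not the paper's implementation: the interpolation with a uniform distribution and the entropy-based adaptive weight (`adaptive_lambda`, `lam_max`) are assumptions introduced here for clarity.

```python
import numpy as np

def smooth_transitions(P, lam=0.1):
    """Linear smoothing of a row-stochastic matrix P:
    P' = (1 - lam) * P + lam * U, with U uniform over the vocabulary.
    Flattens overly peaked rows, so one token no longer absorbs
    nearly all probability mass ('winner takes all')."""
    V = P.shape[-1]
    return (1.0 - lam) * P + lam / V

def adaptive_lambda(P, lam_max=0.3):
    """Hypothetical adaptive weight: choose a per-row smoothing strength
    from the row's entropy, so peaked (low-entropy) rows are smoothed
    more and already-flat rows are left almost unchanged."""
    eps = 1e-12
    ent = -np.sum(P * np.log(P + eps), axis=-1, keepdims=True)
    max_ent = np.log(P.shape[-1])
    return lam_max * (1.0 - ent / max_ent)

# A peaked row (prone to winner-takes-all) and a uniform row.
P = np.array([[0.97, 0.01, 0.01, 0.01],
              [0.25, 0.25, 0.25, 0.25]])
lam = adaptive_lambda(P)
P_smooth = (1.0 - lam) * P + lam / P.shape[-1]
```

After smoothing, each row still sums to one, the peaked row's maximum probability is reduced, and the uniform row is untouched because its entropy is already maximal.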