Abstract

Synthesizing intended programs from user-specified input-output examples, also known as Programming by Examples (PBE), is a challenging problem in program synthesis and has been applied to a wide range of domains. A key challenge in PBE is to efficiently discover a user-intended program within a search space that can be exponentially large. In this work, we propose a method for the automatic synthesis of functional programs on list manipulation that uses offline-trained Recurrent Neural Network (RNN) models to guide the program search. We adopt miniKanren, an embedded domain-specific language for flexible relational programming, as the underlying top-down deductive search engine for candidate programs that are consistent with the input-output examples. Our approach targets an easy and effective integration of deep learning techniques into PBE systems and combines two technical ideas: generating a diverse training dataset, and designing rich feature embeddings of the probable synthesis subproblems generated by deductive search. The offline-learned model is then used during PBE to guide the top-down deductive search with specific strategies. To manipulate list data structures in practice, our method synthesizes functional programs with popular higher-order combinators including $\texttt{map}$, $\texttt{foldl}$ and $\texttt{foldr}$. We have implemented our method and evaluated it on challenging program synthesis tasks on list manipulation. The experiments show promising performance of our method compared to related state-of-the-art inductive synthesizers.
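
To make the setting concrete, the following is a small hypothetical PBE task of the kind targeted here; the task, the examples, and the candidate program are illustrative and not taken from the paper's benchmarks. The user gives a few input-output pairs over lists, and the synthesizer must find a functional program built from combinators such as $\texttt{map}$, $\texttt{foldl}$ and $\texttt{foldr}$ that is consistent with all of them.

    -- Hypothetical PBE task on lists (illustrative, not from the paper's benchmarks).
    -- Input-output examples for an intended "double every element" program.
    examples :: [([Int], [Int])]
    examples = [ ([1, 2, 3], [2, 4, 6])
               , ([0, 5],    [0, 10])
               , ([],        []) ]

    -- One candidate program a search might propose: map with (*2).
    candidate :: [Int] -> [Int]
    candidate = map (* 2)

    -- A candidate is kept only if it is consistent with every example.
    consistent :: ([Int] -> [Int]) -> Bool
    consistent p = all (\(i, o) -> p i == o) examples

    main :: IO ()
    main = print (consistent candidate)  -- True: the candidate satisfies all examples

In the paper's setting, such candidates are enumerated by the miniKanren-based top-down deductive search rather than written by hand; the consistency check against the input-output examples plays the same role as above.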

Highlights

  • Program synthesis aims to automatically synthesize a program that satisfies high-level specifications

  • Programming by Examples (PBE) often assumes a small number of input-output examples; the samples used to train the statistical model are the large set of synthesis subproblems produced during the deductive search

  • We present a neural-guided method for the automatic synthesis of functional programs on list manipulation, where Recurrent Neural Network (RNN) models are trained offline and then used online to guide the program search

Summary

INTRODUCTION

Program synthesis aims to automatically synthesize a program that satisfies high-level specifications. PBE often assumes a small number of input-output examples, while the samples used to train the statistical model are the large set of synthesis subproblems produced during the deductive search. Although the approach benefits from both the correctness of symbolic search and the tactics of deep learning, it is essentially data-driven and requires a large training dataset for the sake of predictive accuracy. As statistical models, such guides cannot guarantee that the synthesized programs are user-desired, especially given very few user-specified examples. When restricted to the synthesis of non-recursive functional programs, our method outperforms the aforementioned three inductive synthesizers on some tasks, such as dropLast and bringToFront, on which all of those methods perform poorly; a similar observation was made in the empirical study of [12].
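
As a rough illustration of how a learned model can guide such a search, the sketch below ranks a tiny, hand-written pool of candidate list programs with a stand-in scoring function before checking them against the examples. Everything here is hypothetical: the real system performs top-down deductive search with miniKanren and scores subproblems with an offline-trained RNN, whereas this sketch only mimics the "score first, then verify" control flow.

    -- Minimal, hypothetical sketch of score-guided candidate selection.
    -- `candidates` and `score` stand in for the deductive enumerator and the
    -- offline-trained RNN, respectively; they are not part of the paper's system.
    import Data.List (sortBy)
    import Data.Ord (comparing, Down(..))

    type Prog = (String, [Int] -> [Int])

    -- A small pool of candidate programs an enumerator might produce.
    candidates :: [Prog]
    candidates =
      [ ("map (*2)",     map (* 2))
      , ("map (+1)",     map (+ 1))
      , ("foldr (:) []", foldr (:) [])
      , ("reverse",      reverse) ]

    -- Stand-in for the learned model: higher scores are tried first.
    score :: Prog -> Double
    score ("map (*2)", _) = 0.9
    score _               = 0.1

    -- Guided search: rank candidates by score, return the first one that is
    -- consistent with all input-output examples.
    search :: [([Int], [Int])] -> Maybe String
    search exs =
      case filter ok (sortBy (comparing (Down . score)) candidates) of
        ((name, _) : _) -> Just name
        []              -> Nothing
      where ok (_, p) = all (\(i, o) -> p i == o) exs

    main :: IO ()
    main = print (search [([1, 2, 3], [2, 4, 6])])  -- Just "map (*2)"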

BACKGROUND
EXPERIMENTS AND RESULTS
RELATED WORK
CONCLUDING REMARKS