Abstract

We show that linear-time self-interpretation of the pure untyped lambda calculus is possible, in the sense that interpretation has a constant-factor overhead compared to direct execution under various execution models. The present paper shows this result for reduction to weak head normal form under call-by-name, call-by-value, and call-by-need. We use a self-interpreter based on previous work on self-interpretation and partial evaluation of the pure untyped lambda calculus, and we use operational semantics to define each reduction strategy. For each strategy we prove a simulation lemma stating that each inference step in the evaluation of a term by the operational semantics is simulated by a sequence of steps in the evaluation of the self-interpreter applied to the term (using the same operational semantics). By assigning costs to the inference rules of the operational semantics, we can compare the cost of direct evaluation with the cost of self-interpretation. Three different cost measures are used: the number of beta-reductions, the cost of a substitution-based implementation (similar to graph reduction), and the cost of an environment-based implementation. For call-by-need we use a non-deterministic semantics, which simplifies the proof considerably.
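To make the cost setting concrete, the sketch below is a minimal substitution-based evaluator to weak head normal form under call-by-name that counts beta-reductions, i.e. the first of the three cost measures. It is an illustrative assumption, not the paper's self-interpreter or its operational semantics; all names (`Term`, `whnf`, `subst`) are chosen here for exposition only.

```haskell
-- Sketch only: call-by-name reduction to weak head normal form,
-- with the number of beta-reductions as the cost measure.
-- This is NOT the paper's interpreter; names and details are assumptions.
import Data.List (union, delete)

data Term = Var String
          | Lam String Term
          | App Term Term
          deriving Show

-- Free variables of a term.
freeVars :: Term -> [String]
freeVars (Var x)   = [x]
freeVars (Lam x m) = delete x (freeVars m)
freeVars (App m n) = freeVars m `union` freeVars n

-- Capture-avoiding substitution m[x := s].
subst :: String -> Term -> Term -> Term
subst x s (Var y)
  | x == y              = s
  | otherwise           = Var y
subst x s (App m n)     = App (subst x s m) (subst x s n)
subst x s (Lam y m)
  | x == y              = Lam y m
  | y `elem` freeVars s = Lam y' (subst x s (subst y (Var y') m))
  | otherwise           = Lam y (subst x s m)
  where y' = freshName y (x : freeVars s `union` freeVars m)

freshName :: String -> [String] -> String
freshName y used =
  head [y ++ show i | i <- [(0 :: Int) ..], (y ++ show i) `notElem` used]

-- Call-by-name reduction to weak head normal form, returning the
-- result together with the number of beta-reductions performed.
whnf :: Term -> (Term, Int)
whnf t@(Var _)   = (t, 0)
whnf t@(Lam _ _) = (t, 0)
whnf (App m n) =
  case whnf m of
    (Lam x body, k) -> let (r, k') = whnf (subst x n body)
                       in (r, k + k' + 1)       -- one beta-step, plus the rest
    (m', k)         -> (App m' n, k)            -- head is stuck (free variable)

-- Example: (\x. x x) (\y. y) reaches \y. y in 2 beta-steps.
main :: IO ()
main = print (whnf (App (Lam "x" (App (Var "x") (Var "x")))
                        (Lam "y" (Var "y"))))
```

In this setting, the paper's linear-time claim would say that running such an evaluator on the self-interpreter applied to a term performs at most a constant factor more beta-steps (or substitution/environment work, under the other two measures) than running it on the term directly.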
