Abstract

Semantics preservation between source and target program is the commonly accepted minimum requirement a compiler must ensure. It is the key notion around which compiler verification and optimization are centered, yet its precise meaning is often left implicit. As a rule of thumb, verification tends to interpret semantics preservation in a very tight sense, not least to simplify the verification task. Optimization generally prefers a more liberal view in order to enable more powerful transformations that would otherwise be excluded. The yardstick of admissibility is semantics preservation, and hence the language semantics; the adequate interpretation, however, varies with the application context (“stand-alone” programs, communicating systems, reactive systems, etc.).

The aim of the workshop is to bring together researchers and practitioners working on optimizing and verifying compilation as well as on programming language design and semantics, in order to plumb the mutual impact of these fields on each other and the degrees of freedom optimizers and verifiers have, to bridge the gap between the communities, and to stimulate synergies.

The accepted papers discuss topics such as certifying compilation, verifying compilation, translation validation, and optimization. Chakravarty et al. present correctness proofs for constant folding and dead code elimination based on SSA form. Hartmann et al. discuss a method to annotate SafeTSA code in order to enable object resolution for dynamic objects under certain conditions; their approach statically analyzes classes to determine whether object resolution is possible at runtime. Berghofer and Strecker describe the mechanical verification, in Isabelle, of a compiler from a small subset of Java to the JVM. The contribution of Alias and Barthou is concerned with algorithm recognition: it presents a preliminary approach for detecting whether an algorithm, i.e., a piece of code, is an instance of a more general algorithm template. Their approach first transforms the piece of code under consideration into a system of affine recurrence equations (SARE) and then checks whether it is an instance of a SARE template. Glesner and Blech formalize the notion of computer arithmetic and develop a classification of such arithmetics; based on this classification they prove the correctness of constant folding, which is not as obvious as it seems at first glance.
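To make this subtlety concrete, the following minimal sketch (our own illustration, not drawn from the paper) contrasts the JVM's 32-bit two's-complement addition with unbounded "ideal" integer arithmetic: a constant folder that evaluates integer expressions with ideal arithmetic would silently change the meaning of a Java program, so folding is only correct relative to a precise model of the machine arithmetic.

```java
// Illustrative example (not from the workshop papers): constant folding
// must use the target arithmetic, not mathematical integers.
import java.math.BigInteger;

public class ConstantFoldingPitfall {
    public static void main(String[] args) {
        // What the Java language / JVM actually computes: 32-bit addition
        // wraps around to Integer.MIN_VALUE.
        int folded = Integer.MAX_VALUE + 1;   // -2147483648

        // What a naive folder using unbounded integers would compute.
        BigInteger naive =
            BigInteger.valueOf(Integer.MAX_VALUE).add(BigInteger.ONE); // 2147483648

        System.out.println("Machine arithmetic: " + folded);
        System.out.println("Ideal arithmetic:   " + naive);
        // The two results differ, so the "obvious" correctness of constant
        // folding hinges on a faithful model of the computer arithmetic.
    }
}
```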
Genet et al. prove the correctness of a converter from ordinary Java class files to CAP, the class file format of Java Card; the proofs are conducted with the PVS theorem proving system. Hoflehner, Lavery and Sehr discuss validation techniques from Intel's IA64 compiler effort. They show how to improve the reliability of both source code and the compilers themselves by means of appropriate validation and self-validation techniques.

The papers in this volume were reviewed by the program committee consisting, besides the editors, of:

• Michael Franz, University of California, Irvine, CA, USA (www.ics.uci.edu/~franz/)
• Peter Lee, Carnegie Mellon University, PA, USA (www-2.cs.cmu.edu/~petel/index.html)
• Erik Meijer, Microsoft Research, Redmond, WA, USA (research.microsoft.com/~emeijer/)
• Oege de Moor, Oxford University, UK (web.comlab.ox.ac.uk/oucl/people/oege.demoor.html)
• Robert Morgan, DataPower, Cambridge, MA, USA (www.datapower.com)
• Mary Lou Soffa, University of Pittsburgh, PA, USA (www.cs.pitt.edu/~soffa/)

We are grateful to the following persons, whose help has been crucial for the success of COCV'03: Damian Niwinski and the organizers of ETAPS'2003 for their help with the organization of the workshop as a satellite event of ETAPS'2003, and Mike Mislove, one of the Managing Editors of the ENTCS series, for his assistance with the use of the ENTCS style files.
