Published: 9th July 2019

Table of Contents

Preface
    Emanuele De Angelis, Grigory Fedyukovich, Nikos Tzevelekos and Mattias Ulbrich

Invited Talk: Foundations and Techniques for the Static Analysis of Ethereum Smart Contracts
    Matteo Maffei

Invited Talk: Concolic Testing of Higher-order Functional Languages
    Konstantinos (Kostis) Sagonas

Extended Abstract: Challenges in the Specialisation of Smart Horn Clause Interpreters
    John P. Gallagher

Understanding Counterexamples for Relational Properties with DIbugger
    Mihai Herda, Michael Kirsten, Etienne Brunner, Joana Plewnia, Ulla Scheler, Chiara Staudenmaier, Benedikt Wagner, Pascal Zwick and Bernhard Beckert

On Quantitative Comparison of Chemical Reaction Network Models
    Ozan Kahramanoğulları

Relational Verification via Invariant-Guided Synchronization
    Qi Zhou, David Heath and William Harris

Ultimate TreeAutomizer (CHC-COMP Tool Description)
    Daniel Dietsch, Matthias Heizmann, Jochen Hoenicke, Alexander Nutz and Andreas Podelski

Proving Properties of Sorting Programs: A Case Study in Horn Clause Verification
    Emanuele De Angelis, Fabio Fioravanti, Alberto Pettorossi and Maurizio Proietti
This volume contains the joint post-proceedings of the 3rd Workshop on Program Equivalence and Relational Reasoning (PERR 2019) and the 6th Workshop on Horn Clauses for Verification and Synthesis (HCVS 2019), which took place in Prague, Czech Republic on April 6 and 7, respectively, as affiliated workshops of the European Joint Conferences on Theory and Practice of Software (ETAPS). PERR and HCVS are both concerned with formal program verification and related questions, with PERR focussing on a particular class of program properties and HCVS focussing on the use of Horn clauses for reasoning about programs. The topics of the two workshops overlap, e.g. techniques based on Horn clauses are also used to conduct equivalence proofs, so it is natural to publish the proceedings in a common volume.
Program Equivalence is a topic touching on multiple communities within the field of formal program analysis. The PERR workshops are dedicated to the formal verification of program equivalence and related relational problems. This edition is the third in a series of meetings that bring together researchers from different areas interested in equivalence. The topic touches upon many aspects of formal methods: regression verification, translation validation, verified compilation, language semantics, deductive verification, (bounded) model checking, specification inference, software evolution and testing.
The goal of the workshop is to bring researchers from these different fields into contact and to stimulate an exchange of ideas that helps forge a community working on PERR. It welcomes contributions on the aforementioned topics, but is also open to new questions regarding program equivalence. This includes related areas of relational reasoning such as program refinement and the verification of hyper-properties, in particular of secure information flow.
The workshop featured eight presentations, and two invited talks by Margus Veanes (Microsoft Research) and Marco Eilers (ETH Zürich).
Many program verification and synthesis problems of interest can be modeled directly using Horn clauses, and many recent advances in Constraint/Logic Programming, Verification, and Automated Deduction have centered around efficiently solving problems presented as Horn clauses. The aim of the HCVS workshop series is to bring together researchers working in the communities of Constraint/Logic Programming (e.g., ICLP and CP), Program Verification (e.g., CAV, TACAS, and VMCAI), and Automated Deduction (e.g., CADE) on the topic of Horn clause based analysis, verification and synthesis. Horn clauses for verification and synthesis have been advocated by these communities at different times and from different perspectives, and this workshop is organized to stimulate interaction and a fruitful exchange and integration of experiences.
This edition follows five previous meetings: HCVS 2018 in Oxford, UK (with FLoC), HCVS 2017 in Gothenburg, Sweden (with CADE), HCVS 2016 in Eindhoven, The Netherlands (with ETAPS), HCVS 2015 in San Francisco, CA, USA (with CAV), and HCVS 2014 in Vienna, Austria (with VSL).
The workshop featured six papers, a report on the Second Competition on Constrained Horn Clauses (CHC-COMP), and two invited talks by Matteo Maffei (TU Wien, Austria) and Konstantinos Sagonas (Uppsala University, Sweden).
We wish to express our gratitude to the authors who submitted papers, the speakers, the invited speakers, and the program committee members for their generous work. We would like to thank the ETAPS general chair Jan Kofroň and the ETAPS workshops chairs Milan Ceska (Brno University of Technology, Czech Republic) and Ryan Culpepper (Czech Technical University, Czech Republic) for their help in making the event possible. Our thanks go also to Artem Pelenitsyn (Northeastern University, United States), who assisted us in managing the workshops' websites. Finally, we warmly thank the EasyChair organization for supporting the various tasks related to the selection of the contributions, and also EPTCS and arXiv for hosting the proceedings.
Emanuele De Angelis, Grigory Fedyukovich, Nikos Tzevelekos and Mattias Ulbrich.
The recent growth of the blockchain technology market puts its main cryptocurrencies in the spotlight. Among them, Ethereum stands out due to its virtual machine (EVM) supporting smart contracts, i.e., distributed programs that control the flow of the digital currency Ether. Being written in a Turing-complete language, Ethereum smart contracts allow for expressing a broad spectrum of financial applications. The price for this expressiveness, however, is a significant semantic complexity, which increases the risk of programming errors. Recent attacks exploiting bugs in smart contract implementations call for the design of formal verification techniques for smart contracts. This, however, requires rigorous semantic foundations, a formal characterization of the expected security properties, and dedicated abstraction techniques tailored to the specific EVM semantics.
This tutorial will overview the state-of-the-art in smart contract verification, covering formal semantics, security definitions, and verification tools. We will then focus on EtherTrust, a framework for the static analysis of Ethereum smart contracts that we recently introduced, which includes the first complete small-step semantics of EVM bytecode, the first formal characterization of a large class of security properties for smart contracts, and the first static analysis for EVM bytecode, based on Horn clause resolution, that comes with a proof of soundness.
Concolic testing is a fully automatic software testing technique that combines concrete and symbolic execution of a program unit in an attempt to explore all the code paths in this unit, or at least all its paths up to a depth bound. In this talk, we will describe how concolic testing can be applied to high-level languages in general and to functional programming languages in particular. For such languages, the concolic engine needs to efficiently support pattern matching, recursive data types such as lists, recursion, and higher-order functions. We will also briefly talk about the engineering effort that concolic testing tools require, in particular in interfacing with SMT solvers.
Time permitting, the talk will also include a demo of CutEr (as in "more cute"), a concolic testing tool for Erlang and some of the bugs in the implementation of Erlang/OTP that CutEr has discovered.
Automatic program specialisation has re-emerged in the last few years as a key technique of meta-programming. Notable recent applications of specialisation include collapsing towers of interpreters in the system stack, high-performance implementations of widely-used dynamic languages, SQL query compilation and code obfuscation. Program specialisation has fundamental connections to language compilation, optimisation, and recursive function theory (the S-m-n theorem). Indeed, the authors of one recent work state that their technique is based on "an old but under-appreciated idea known as Futamura projections, which fundamentally link interpreters and compilers".
This abstract focusses on the challenges of achieving the first Futamura projection, which refers to the specialisation of an interpreter with respect to a program with unknown input data. Consider an interpreter $Int$ as a function $Int: Prog \times I \rightarrow O$, where $I$ and $O$ are the input and output domains respectively for programs in $Prog$. Specialising $Int$ with respect to a program $p \in Prog$ yields a version of $Int$, called $Int_p$, that interprets $p$ but no other program; it is denoted as a function $Int_p : I \rightarrow O$. In short, $Int_p$ intertwines $Int$ with $p$ and has the same signature as $p$ (meta-language encoding is ignored here). $Int_p$ can be regarded as a "compilation" of $p$ into the language of the interpreter (which could be the same language). The semi-formal concept of "Jones-optimality" is sometimes used to describe the ideal specialisation, in which the "meta-layer" of interpretation of $p$ is completely removed in $Int_p$.
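The first projection can be pictured with a toy sketch (not taken from the abstract; all names and the expression language are ours): a tiny interpreter $Int$ for arithmetic expressions, an object program $p$ with unknown input $x$, and a hand-written residual program standing in for the ideal $Int_p$, in which the interpretive layer has been removed entirely.

```python
def interp(prog, env):
    """A tiny interpreter Int : Prog x I -> O for arithmetic expressions.
    Programs are nested tuples: ('const', n), ('var', name),
    ('add', e1, e2) or ('mul', e1, e2); env maps variable names to values."""
    op = prog[0]
    if op == 'const':
        return prog[1]
    if op == 'var':
        return env[prog[1]]
    if op == 'add':
        return interp(prog[1], env) + interp(prog[2], env)
    if op == 'mul':
        return interp(prog[1], env) * interp(prog[2], env)
    raise ValueError(op)

# The object program p: x * x + 1, with the input x unknown (dynamic).
p = ('add', ('mul', ('var', 'x'), ('var', 'x')), ('const', 1))

# A Jones-optimal specialiser would produce a residual Int_p : I -> O
# with no trace of the tuple dispatch; here it is written by hand.
def interp_p(env):
    return env['x'] * env['x'] + 1

assert interp(p, {'x': 7}) == interp_p({'x': 7}) == 50
```

The point of the sketch is only the shape of the signatures: `interp` takes both a program and an input, while `interp_p` takes the input alone, as $Int_p : I \rightarrow O$ requires.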
Smart interpreters. Jones-optimality applies in practice to "standard" interpreters that are driven by the syntactic structure of the object program. The recent works cited above could all be considered to be of this kind. However, the most far-reaching potential of interpreter specialisation for program transformation arises from the specialisation of "smart" interpreters, which can be arbitrarily complex and incorporate run-time operations such as memoisation, constraint propagation and dynamic task management. Such operations are not guided by the structure of the input program, but rather depend on dynamic structures such as memo-tables, task pools, and constraint stores, which are hard to "compile away".
The overhead of smart interpreters can be prohibitive and limit their use to problems where the overhead is offset by the savings. Thus there is a great incentive to remove the overhead of smart interpreters with respect to a given object program. This suggestion is well known; however, the complexity of smart interpreters has so far presented a barrier to realising its potential.
For example, super-linear speed-ups are known to be achievable by specialisation of a memoising interpreter, as pointed out by Jones; a classical example of a transformation achievable in principle is the generation of a linear-time implementation of the Fibonacci function from its recursive functional definition, whose straightforward execution requires exponential time. However, effective specialisation of a memoising interpreter is considerably difficult and has so far (to our knowledge) defied all efforts; the problems are discussed briefly below.
Despite the difficulties, a foundation for a breakthrough in interpreter specialisation is available, provided by the spectacular advances in tools and techniques for reasoning about and transforming programs. New abstract interpretation techniques and powerful back-end constraint solvers have played a crucial role in the recent successes of scalable program analysis and verification tools.
The following three challenges outline the promises, difficulties and potential solutions for the specialisation of smart interpreters. The common theme is that their solution requires a combination of techniques from abstract interpretation, verification and constraint solving. Constrained Horn clauses (CHCs) have been shown to be an effective framework for exploiting these advances for software verification. It is argued that CHCs offer similar advantages in the quest to realise the potential of smart interpreter specialisation. We propose to implement the interpreters sketched below as CHCs, and to specialise them using a polyvariant specialisation technique, as recently reported, which makes use of program analysis techniques to derive invariant properties that guide the specialisation. Recent work by Nys and De Schreye on "compiling control" shares some of the goals of specialising smart interpreters.
Challenge 1: A memoising interpreter. A memoising interpreter maintains a table during execution, recording the function calls that arise and their results when they are returned. Each time a function is called, the memo-table is consulted, and so repeated executions of the same function call are avoided. Memoising can result in exponential speed-up.
Applying the first Futamura projection, one would expect at least that the memo-table becomes "compiled in" to the interpreted program as a data structure, and every function call is replaced by a conditional statement that checks whether the call's result is already available in the table. To achieve effective specialisation (say for the Fibonacci example), more is needed. For some function calls in the program, it can be detected statically that the memo-table definitely contains the result, while for others, the memo-table lookup will definitely fail. Furthermore, the memo-table needs only to contain the calls that might possibly be needed later, otherwise it will contain many useless entries.
Sufficiently precise abstractions of the memo-table during specialisation are needed to achieve this. Recent results on static cache analysis provide inspiration for possible approaches: caches are essentially memo-tables for memory reads, and one wants to know which read operations in a source program will definitely cause a cache hit and which ones will miss, and furthermore to ensure that the cache contains only live values that might be hit.
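As a concrete (purely illustrative) picture of the hoped-for outcome for the Fibonacci example, the sketch below contrasts the straightforward recursive definition with a residual program in which the memo-table has been "compiled in" as an explicit data structure consulted before every call; the function names are ours, not from the abstract.

```python
def fib_naive(n):
    # Straightforward execution of the recursive definition:
    # exponential time, since subproblems are recomputed.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

def fib_memo(n, table=None):
    # The memo-table as an explicit data structure: every call first
    # checks the table, so each argument is computed at most once and
    # the running time becomes linear in n.
    if table is None:
        table = {}
    if n in table:
        return table[n]
    result = n if n < 2 else fib_memo(n - 1, table) + fib_memo(n - 2, table)
    table[n] = result
    return result

assert fib_naive(10) == fib_memo(10) == 55
```

The static-analysis challenge described above would go further: for calls whose result is statically known to be in the table, the conditional check itself should disappear from the residual program.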
Challenge 2: A constraint propagating interpreter. Many problems are conveniently stated as "generate-and-test": the space of potential solutions is given by a generate function, while the test function checks whether the candidate solutions satisfy the required constraints. Naive interpretation of generate-and-test as a solution procedure is usually impractical since the solution space is too large to be enumerated. A constraint propagating interpreter explores the generate and test functions in parallel, applying principles of constraint programming to limit search, propagate constraints, identify inconsistencies early and discover invariants. The specialisation of a constraint propagating interpreter is a significant challenge, inviting cross-fertilisation of constraint programming with verification and program analysis; Pelleau et al. point to many connections, such as the use of over-approximations obtained by abstract interpretation to reduce search.
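A minimal sketch of the gap between pure generate-and-test and even a single propagation step (the toy problem, bounds and names below are invented for illustration): find pairs with sum 10 and product 21.

```python
from itertools import product

def naive(limit=100):
    # Pure generate-and-test: enumerate the whole candidate space,
    # then check all constraints on each complete candidate.
    return [(x, y) for x, y in product(range(limit), repeat=2)
            if x + y == 10 and x * y == 21]

def propagating(limit=100):
    # Propagate the constraint x + y == 10 as soon as x is chosen:
    # y is then forced to 10 - x, collapsing the inner enumeration,
    # and only the remaining constraint needs testing.
    solutions = []
    for x in range(limit):
        y = 10 - x
        if 0 <= y < limit and x * y == 21:
            solutions.append((x, y))
    return solutions

assert naive() == propagating() == [(3, 7), (7, 3)]
```

A constraint propagating interpreter performs this kind of pruning dynamically and generically; the challenge stated above is to specialise such pruning away into the residual program for a fixed generate-and-test problem.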
Challenge 3: A parallelising interpreter. The first Futamura projection shows that a sequential program can be "compiled" to a parallel program, where $Int$ is written in a parallel language, and the object program $p$ is written in a sequential language. The first Futamura projection yields a parallel program $Int_p$ and requires a specialiser for a parallel programming language; this topic has received a lot of attention but presents a number of problems compared to specialisation of sequential programs, due to the need to preserve synchronisation between parallel processes. The interpreter $Int$ manages a pool of parallel processes and computing resources; the purpose of the specialisation is to perform as much "compile-time" scheduling as possible. Abstractions of the process pool are needed during specialisation, a challenging problem requiring input from the state of the art in timing analysis and cache analysis.