Published: 15th October 2013
DOI: 10.4204/EPTCS.131
ISSN: 2075-2180


Proceedings 6th
Interaction and Concurrency Experience
Florence, Italy, 6th June 2013

Edited by: Marco Carbone, Ivan Lanese, Alberto Lluch Lafuente and Ana Sokolova

Invited Presentation: Coinductive techniques for higher-order languages
Davide Sangiorgi
Contract agreements via logic
Massimo Bartoletti, Tiziana Cimoli, Paolo Di Giamberardino and Roberto Zunino
Extended Connectors: Structuring Glue Operators in BIP
Eduard Baranov and Simon Bliudze
Invited Presentation: Three Algorithms for Must Semantics
Filippo Bonchi
On Context Bisimulation for Parameterized Higher-order Processes
Xian Xu
Choreography Synthesis as Contract Agreement
Julien Lange and Alceste Scalas
On Recovering from Run-time Misbehaviour in ADR
Kyriakos Poyias and Emilio Tuosto
ABS-NET: Fully Decentralized Runtime Adaptation for Distributed Objects
Karl Palmskog, Mads Dam, Andreas Lundblad and Ali Jafari


This volume contains the proceedings of ICE'13, the 6th Interaction and Concurrency Experience workshop, which was held in Florence, Italy on the 6th of June 2013 as a satellite event of DisCoTec'13. The main distinguishing aspect of the ICE workshop series is its novel review and selection procedure.
The previous editions of ICE were affiliated with ICALP'08 (Reykjavik, Iceland), CONCUR'09 (Bologna, Italy), DisCoTec'10 (Amsterdam, The Netherlands), DisCoTec'11 (Reykjavik, Iceland) and DisCoTec'12 (Stockholm, Sweden).
The ICE procedure for paper selection allows PC members to interact, anonymously, with authors. During the review phase, each submitted paper is associated with a discussion forum whose access is restricted to the authors and to the PC members who have not declared a conflict of interest. The PC members post comments and questions to which the authors reply. As in the past five editions, the forum discussion during the review and selection phase of ICE'13 considerably improved the accuracy of the reviewers' feedback and the quality of the accepted papers, and provided the basis for a lively discussion during the workshop. The interactive selection procedure implies additional effort for both authors and PC members, but the time spent in the forum interaction is rewarding for both authors -- when updating their papers -- and reviewers -- when writing their reviews: the forum discussion helps to discover and correct typos in key definitions and misspelled statements, to improve examples and the presentation of critical cases, and to resolve misunderstandings at the earliest stage of the review process.

Each paper was reviewed by three PC members, and altogether 6 papers were accepted for publication. We were proud to host two invited talks, by Davide Sangiorgi and Filippo Bonchi, whose abstracts are included in this volume together with the regular papers. The workshop also featured a brief announcement of an already published paper. The final versions of the contributions, which take into account the discussion at the workshop, are included in the same order in which they were presented at the workshop.
We would like to thank the authors of all the submitted papers for their interest in the workshop and the PC members for their efforts, which ensured the success of the ICE workshop. We thank Davide Sangiorgi and Filippo Bonchi for accepting our invitations to present their recent work, and the DisCoTec'13 organisers, in particular the general and workshop chairs, for providing an excellent environment for the preparation and staging of the event. Finally, we thank the EPTCS editors for the publication of these post-proceedings.

Marco Carbone -- ITU (Denmark)
Ivan Lanese -- University of Bologna/INRIA (Italy)
Alberto Lluch Lafuente -- IMT Lucca (Italy)
Ana Sokolova -- University of Salzburg (Austria)

Coinductive techniques for higher-order languages

Davide Sangiorgi (University of Bologna and INRIA)


This is a summary of, and a brief introduction to, the talk presented at ICE 2013.
Proving the equivalence of computer programs is an important but challenging problem. Two programs are equivalent if they behave ``in the same manner'' under any context [19]. Finding effective methods for equivalence proofs is particularly challenging in higher-order languages, i.e., languages in which program code can be passed around like other data. The prototypical higher-order language is the lambda-calculus, the basis of functional languages. The methods should also scale up to richer languages, including non-functional features (non-determinism, information-hiding mechanisms such as generative names, store, data abstraction, etc.).
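As a reminder, the Morris-style notion of contextual equivalence informally recalled above can be written as follows, where C ranges over program contexts, C[P] is the program obtained by plugging P into the hole of C, and (under a lazy regime) the observable is termination of evaluation:

```latex
% Morris-style contextual equivalence: P and Q are indistinguishable
% under every program context C, observing termination (\Downarrow).
P \simeq Q
  \quad\text{iff}\quad
  \text{for every context } C:\;
  C[P] \Downarrow \;\Longleftrightarrow\; C[Q] \Downarrow
```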
Bisimulation has emerged as a very powerful operational method for proving equivalence of programs in various kinds of languages, due to the associated coinductive proof method. Further, a number of enhancements of the bisimulation method have been studied, usually called up-to techniques. To be useful, the behavioral relation resulting from bisimulation---bisimilarity---should be a congruence. Bisimulation was transplanted onto higher-order languages by Abramsky [1]. This version of bisimulation, called applicative bisimulation, and variants of it have received considerable attention [10,12,23,25,17]. In short, two functions P and Q are applicatively bisimilar when their applications P(M) and Q(M) are applicatively bisimilar for any argument M.
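The clause in the last sentence can be phrased coinductively; as a sketch, in the pure lazy lambda-calculus of [1], where ⇓ denotes evaluation to a value:

```latex
% A relation R on closed terms is an applicative bisimulation if,
% whenever P R Q and P evaluates to an abstraction, Q evaluates to an
% abstraction whose instantiations at every closed argument M remain
% related (and symmetrically for Q). Applicative bisimilarity is the
% largest applicative bisimulation.
P \mathrel{R} Q \;\wedge\; P \Downarrow \lambda x.\,P'
  \;\Longrightarrow\;
  \exists Q'.\; Q \Downarrow \lambda x.\,Q'
  \;\wedge\; \forall M.\; P'[M/x] \mathrel{R} Q'[M/x]
```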
In the talk, I will review the limitations of applicative bisimulations. First, they do not scale well to languages richer than the pure lambda-calculus: for instance, they are unsound in the presence of generative names [14] or data abstraction [30], because they apply bisimilar functions to an identical argument. Secondly, congruence proofs of applicative bisimulations are notoriously hard. Such proofs usually rely on Howe's method [13]. The method, however, appears rather subtle and fragile, for instance in the presence of generative names [14], non-determinism [13], or concurrency (e.g., [6,7]). Also, the method is highly syntactic and offers little intuition about when and why it works. Related to the problems with congruence are also the difficulties of applicative bisimulations with ``up-to context'' techniques (the usefulness of these techniques in higher-order languages and their problems with applicative bisimulations have been extensively studied by Lassen [17]; see also [25,16]).
Congruence proofs for bisimulations usually exploit the bisimulation method itself, establishing that the closure of bisimilarity under contexts is again a bisimulation. To see, intuitively, why this proof does not work for applicative bisimulation, consider a pair of bisimilar functions P_1, Q_1 and another pair of bisimilar terms P_2, Q_2. In an application context they yield the terms P_1 P_2 and Q_1 Q_2 which, if bisimilarity is a congruence, should be bisimilar. However, the arguments supplied to the functions P_1 and Q_1, namely P_2 and Q_2, are bisimilar but not necessarily identical: hence we are unable to apply the bisimulation hypothesis on the functions.
I will then discuss proposals for improving applicative bisimilarity, notably environmental bisimulations [28] and logical bisimulations [27]. Finally, I will consider extensions of the language with probability. Probabilistic models are increasingly pervasive. Not only are they a formidable tool when dealing with uncertainty and incomplete information, but they are sometimes a necessity rather than an alternative, as in computational cryptography (where, e.g., secure public-key encryption schemes need to be probabilistic [8]). A nice way to deal computationally with probabilistic models is to allow probabilistic choice as a primitive when designing algorithms, thereby switching from the usual deterministic computation to a new paradigm, called probabilistic computation. Examples of application areas in which probabilistic computation has proved useful include natural language processing [18], robotics [31], computer vision [5], and machine learning [21].
This new form of computation, of course, needs to be made available to programmers. And indeed, various programming languages have been introduced in recent years, spanning from abstract ones [15,24,20] to more concrete ones [22,9], and inspired by various programming paradigms such as imperative, functional or even object-oriented programming. A common scheme consists in endowing a deterministic language with one or more primitives for probabilistic choice, such as binary probabilistic choice or primitives for distributions.
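As an illustration of this scheme (not tied to any particular language in the references; the names `unit`, `choose` and `bind` are purely illustrative), a binary probabilistic-choice primitive can be sketched on top of a deterministic host language by representing programs' results as finite distributions:

```python
from fractions import Fraction

# A sketch: distributions as finite maps from outcomes to probabilities,
# with `unit` (deterministic result), `choose` (binary probabilistic
# choice) and `bind` (sequencing), in the spirit of the monadic account
# of probabilistic computation.

def unit(x):
    # the deterministic computation returning x
    return {x: Fraction(1)}

def choose(p, d1, d2):
    # binary probabilistic choice: d1 with probability p, d2 with 1 - p
    out = {}
    for x, q in d1.items():
        out[x] = out.get(x, Fraction(0)) + p * q
    for x, q in d2.items():
        out[x] = out.get(x, Fraction(0)) + (1 - p) * q
    return out

def bind(d, f):
    # run d, then feed each outcome to the probabilistic continuation f
    out = {}
    for x, p in d.items():
        for y, q in f(x).items():
            out[y] = out.get(y, Fraction(0)) + p * q
    return out

# a fair coin, followed by a deterministic computation on its outcome
coin = choose(Fraction(1, 2), unit(0), unit(1))
result = bind(coin, lambda b: unit(b + 1))
# result is the uniform distribution over {1, 2}
```

The point of the sketch is only that probabilistic choice composes like any other effect once outcomes are tracked as distributions; real probabilistic languages expose such a primitive directly in their syntax.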
One class of languages that copes well with probabilistic computation is that of functional languages. Indeed, viewing algorithms as functions allows a smooth integration of distributions into the playground, which is nicely reflected at the level of types through monads [11,24]. As a matter of fact, many existing probabilistic programming languages [22,9] are designed around the lambda-calculus or one of its incarnations, like Scheme. All of these allow one to write higher-order functions (programs can take functions as inputs and produce them as outputs). I will go through some recent work with M. Alberti and U. Dal Lago [2] on bisimulation and context equivalence in a probabilistic lambda-calculus. First, I show a technique for proving congruence of probabilistic applicative bisimilarity. While the technique follows Howe's method, some of the technicalities are quite different, relying on non-trivial ``disentangling'' properties for sets of real numbers, themselves proved with tools from linear algebra. Second, I show that, while bisimilarity is in general strictly finer than context equivalence, the two relations coincide on pure lambda-terms. The resulting equality is that induced by Lévy-Longo trees, generally accepted as the finest extensional equivalence on pure lambda-terms under a lazy regime. For the proof, I follow techniques in [26,29]. Finally, I derive a coinductive characterisation of context equivalence on the whole probabilistic language, via an extension in which terms akin to distributions may appear in redex position. Another motivation for the extension is that its operational semantics allows us to experiment with a different congruence technique, namely that of logical bisimilarity.


  1. Samson Abramsky (1990): The Lazy Lambda Calculus. In: David A. Turner: Research Topics in Functional Programming. Addison-Wesley, pp. 65–117.
  2. Michele Alberti, Ugo Dal Lago & Davide Sangiorgi (2013): On Coinductive Equivalences for Higher-Order Probabilistic Functional Programs. Submitted.
  3. Setsuo Arikawa & Koichi Furukawa (1999): Proceedings of the Second International Conference on Discovery Science. Lecture Notes in Artificial Intelligence 1721. Springer-Verlag.
  4. Setsuo Arikawa & Hiroshi Motoda (1998): Proceedings of the First International Conference on Discovery Science. Lecture Notes in Artificial Intelligence 1532. Springer-Verlag.
  5. Dorin Comaniciu, Visvanathan Ramesh & Peter Meer (2003): Kernel-based object tracking. IEEE Trans. on Pattern Analysis and Machine Intelligence 25(5), pp. 564–577, doi:10.1109/TPAMI.2003.1195991.
  6. William Ferreira, Matthew Hennessy & Alan Jeffrey (1998): A Theory of Weak Bisimulation for Core CML. Journal of Functional Programming 8(5), pp. 447–491, doi:10.1017/S0956796898003165.
  7. Jens Chr. Godskesen & Thomas Hildebrandt (2005): Extending Howe's Method to Early Bisimulations for Typed Mobile Embedded Resources with Local Names. In: Foundations of Software Technology and Theoretical Computer Science, Lecture Notes in Computer Science 3821. Springer-Verlag, pp. 140–151, doi:10.1007/11590156_11.
  8. Shafi Goldwasser & Silvio Micali (1984): Probabilistic Encryption. J. Comput. Syst. Sci. 28(2), pp. 270–299, doi:10.1016/0022-0000(84)90070-9.
  9. Noah D. Goodman (2013): The principles and practice of probabilistic programming. In: POPL, pp. 399–402.
  10. Andrew D. Gordon (1993): Functional Programming and Input/Output. University of Cambridge.
  11. Andrew D. Gordon, Mihhail Aizatulin, Johannes Borgström, Guillaume Claret, Thore Graepel, Aditya V. Nori, Sriram K. Rajamani & Claudio V. Russo (2013): A model-learner pattern for bayesian reasoning. In: POPL, pp. 403–416.
  12. Andrew D. Gordon & Gareth D. Rees (1996): Bisimilarity for a First-Order Calculus of Objects with Subtyping. In: Proceedings of the 23rd ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, pp. 386–395, doi:10.1145/237721.237807.
  13. Douglas J. Howe (1996): Proving Congruence of Bisimulation in Functional Programming Languages. Information and Computation 124(2), pp. 103–112, doi:10.1006/inco.1996.0008.
  14. Alan Jeffrey & Julian Rathke (1999): Towards a Theory of Bisimulation for Local Names. In: 14th Annual IEEE Symposium on Logic in Computer Science, pp. 56–66, doi:10.1109/LICS.1999.782586.
  15. C. Jones & Gordon D. Plotkin (1989): A Probabilistic Powerdomain of Evaluations. In: LICS, pp. 186–195.
  16. Vassileios Koutavas & Mitchell Wand (2006): Small Bisimulations for Reasoning About Higher-Order Imperative Programs. In: Proceedings of the 33rd ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, pp. 141–152, doi:10.1145/1111037.1111050.
  17. S. B. Lassen (1998): Relational Reasoning about Functions and Nondeterminism. Department of Computer Science, University of Aarhus.
  18. Christopher D. Manning & Hinrich Schütze (1999): Foundations of statistical natural language processing. MIT Press.
  19. James H. Morris, Jr. (1968): Lambda-Calculus Models of Programming Languages. Massachusetts Institute of Technology.
  20. Sungwoo Park, Frank Pfenning & Sebastian Thrun (2008): A probabilistic language based on sampling functions. ACM Trans. Program. Lang. Syst. 31(1).
  21. Judea Pearl (1988): Probabilistic reasoning in intelligent systems: networks of plausible inference. Morgan Kaufmann.
  22. Avi Pfeffer (2001): IBAL: A Probabilistic Rational Programming Language. In: IJCAI. Morgan Kaufmann, pp. 733–740.
  23. Andrew Pitts (1997): Operationally-Based Theories of Program Equivalence. In: Andrew M. Pitts & P. Dybjer: Semantics and Logics of Computation, Publications of the Newton Institute. Cambridge University Press, pp. 241–298, doi:10.1017/CBO9780511526619.
  24. Norman Ramsey & Avi Pfeffer (2002): Stochastic lambda calculus and monads of probability distributions. In: POPL, pp. 154–165.
  25. David Sands (1998): Improvement theory and its applications. In: Andrew D. Gordon & Andrew M. Pitts: Higher Order Operational Techniques in Semantics, Publications of the Newton Institute. Cambridge University Press, pp. 275–306, doi:10.1145/292540.292547.
  26. D. Sangiorgi (1994): The Lazy Lambda Calculus in a Concurrency Scenario. Inf. and Comp. 111(1), pp. 120–153, doi:10.1006/inco.1994.1042.
  27. Davide Sangiorgi, Naoki Kobayashi & Eijiro Sumii (2007): Logical Bisimulations and Functional Languages. In: Proc. Fundamentals of Software Engineering (FSEN), LNCS 4767. Springer-Verlag, pp. 364–379, doi:10.1007/978-3-540-75698-9_24.
  28. Davide Sangiorgi, Naoki Kobayashi & Eijiro Sumii (2011): Environmental bisimulations for higher-order languages. ACM Trans. Program. Lang. Syst. 33(1), pp. 5, doi:10.1145/1889997.1890002.
  29. Davide Sangiorgi & David Walker (2001): The pi-Calculus – a theory of mobile processes. Cambridge University Press.
  30. Eijiro Sumii & Benjamin C. Pierce (2005): A Bisimulation for Type Abstraction and Recursion. In: Proceedings of the 32nd ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, pp. 63–74, doi:10.1145/1040305.1040311.
  31. Sebastian Thrun (2002): Robotic mapping: A survey. Exploring artificial intelligence in the new millennium, pp. 1–35.

Three Algorithms for Must Semantics

Filippo Bonchi (Ecole Normale Supérieure de Lyon, France)

Checking language equivalence (or inclusion) of finite automata is a classical problem in computer science, which has recently received renewed interest and found novel, more effective solutions, such as the approaches based on antichains or bisimulations up-to.
Several notions of equivalences (or preorders) have been proposed for the analysis of concurrent systems. Usually, the problem of checking these equivalences is reduced to the problem of checking bisimilarity.
In this work, we take a different approach and propose to "adapt" algorithms for language semantics. More precisely, we introduce an analogue of Brzozowski's algorithm and an algorithm based on bisimulations up-to for checking must equivalence and the must preorder.
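As background for the language-semantics algorithms mentioned above, the following is a minimal sketch of a bisimulation-based language-equivalence check for plain deterministic automata, in the style of Hopcroft and Karp: candidate pairs already related by the union-find structure are skipped, which is precisely the "up to equivalence" step. (This covers ordinary language equivalence only, not must semantics; all names are illustrative.)

```python
# Checking DFA language equivalence via bisimulation up to equivalence
# (Hopcroft-Karp style): grow a relation on state pairs, closing it
# under transitions, and discard pairs already in the same class.

def equivalent(delta, accepting, s0, t0, alphabet):
    # delta: dict mapping (state, letter) -> state; accepting: set of states
    parent = {}

    def find(x):  # union-find `find` with path compression
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    todo = [(s0, t0)]
    while todo:
        s, t = todo.pop()
        rs, rt = find(s), find(t)
        if rs == rt:          # pair already related: skip (the "up-to" step)
            continue
        if (s in accepting) != (t in accepting):
            return False      # distinguishing observation found
        parent[rs] = rt       # merge the two equivalence classes
        for a in alphabet:    # close the relation under transitions
            todo.append((delta[(s, a)], delta[(t, a)]))
    return True

# two automata over {a}: states 0,1 form a cycle, state 2 a self-loop;
# with all states accepting, both accept a*, so they are equivalent
delta = {(0, 'a'): 1, (1, 'a'): 0, (2, 'a'): 2}
print(equivalent(delta, {0, 1, 2}, 0, 2, ['a']))  # True
```

The up-to step is what keeps the visited relation small: without it, the worklist would revisit pairs whose relatedness already follows by reflexivity, symmetry and transitivity from pairs processed earlier.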