Published: 15th April 2019
DOI: 10.4204/EPTCS.292
ISSN: 2075-2180

EPTCS 292

Proceedings Joint International Workshop on
Linearity & Trends in Linear Logic and Applications
Oxford, UK, 7-8 July 2018

Edited by: Thomas Ehrhard, Maribel Fernández, Valeria de Paiva and Lorenzo Tortora de Falco

Preface
Termination of Lambda-Calculus Linearisation Methods
Sandra Alves
Formalization of Automated Trading Systems in a Concurrent Linear Framework
Iliano Cervesato, Sharjeel Khan, Giselle Reis and Dragiša Žunić
1
The Bang Calculus and the Two Girard's Translations
Giulio Guerrieri and Giulio Manzonetto
15
From Linear Logic to Cyclic Sharing
Masahito Hasegawa
31
On the Lambek Calculus with an Exchange Modality
Jiaming Jiang, Harley Eades III and Valeria de Paiva
43
Taking Linear Logic Apart
Wen Kokke, Fabrizio Montesi and Marco Peressotti
90
Coherent Interaction Graphs
Lê Thành Dũng Nguyen and Thomas Seiller
104
The ILLTP Library for Intuitionistic Linear Logic
Carlos Olarte, Valeria de Paiva, Elaine Pimentel and Giselle Reis
118
Quantum Programming Made Easy
Luca Paolini, Luca Roversi and Margherita Zorzi
133
Proof Nets, Coends and the Yoneda Isomorphism
Paolo Pistone
148

Preface

This volume contains a selection of the papers presented at the Joint Linearity and Trends in Linear Logic and Applications workshop (Linearity-TLLA 2018), held in Oxford on 7-8 July 2018 as part of FLoC 2018.

Linearity has been a key feature in several lines of research in both theoretical and practical approaches to computer science. On the theoretical side there is much work stemming from linear logic dealing with proof technology, complexity classes and more recently quantum computation. On the practical side there is work on program analysis, expressive operational semantics for programming languages, linear programming languages, program transformation, update analysis and efficient implementation techniques.

Linear logic is not only a theoretical tool to analyse the use of resources in logic and computation. It is also a corpus of tools, approaches, and methodologies (proof nets, exponential decomposition, geometry of interaction, coherent spaces, relational models, etc.) that were originally developed for the study of linear logic's syntax and semantics and are nowadays applied in several other fields.

The aim of the Linearity-TLLA workshop is to bring together researchers who are currently working on linear logic and related fields, to foster their interaction and provide a forum for presenting new ideas and work in progress, and to enable newcomers to learn about current activities in this area. The main goal is to present and discuss new trends in Linear Logic and its applications, by means of tutorials, invited talks, open discussions, and contributed talks. New results that make central use of linearity, ranging from foundational work to applications in any field, are welcome. Also welcome are more exploratory presentations, which may examine open questions and raise fundamental concerns about existing theories and practices.

The programme of the workshop consisted of eighteen contributed papers and four invited talks, by Sandra Alves, Amina Doumane, Michele Pagani and Frank Pfenning. After a second round of reviewing, nine papers were selected for publication in the Linearity-TLLA 2018 post-proceedings. EasyChair was used to manage the reviewing process and to generate the pre-proceedings.

The Programme Committee of Linearity-TLLA 2018 consisted of: Vito Michele Abrusci, Thomas Ehrhard (co-chair), Maribel Fernández (co-chair), Stefano Guerrini, Masahito Hasegawa, Olivier Laurent, Paul-André Melliès, Valeria de Paiva (co-chair), Elaine Pimentel, Simona Ronchi della Rocca, Christine Tasson, Lorenzo Tortora de Falco (co-chair).

We would like to thank all those who contributed to Linearity-TLLA 2018, particularly the invited speakers, programme committee members and local organisers.

Thomas Ehrhard, Maribel Fernández, Valeria de Paiva and Lorenzo Tortora de Falco
1st March 2019


Termination of Lambda-Calculus Linearisation Methods

Sandra Alves


The notion of linearisation of the $\lambda$-calculus, as a process for transforming (or simulating) non-linear functions into (or using) "equivalent" linear functions, was first introduced by Assaf Kfoury [7], and has since been used as a transformation technique in several settings. Directly related to Kfoury's notion of linearisation is the use of linearisation in hardware compilation [6] for synthesising digital circuits, in which a notion of serialisation is defined to deal with contraction in concurrent contexts, based on type information that indicates bounds on the usage of functions. Kfoury's linearisation is also closely related to the notion of affine approximations by Mazza et al. [9], which identifies distinct occurrences of variables that apply linearly to sequences of arguments. A $\lambda$-term admits a simply-typed affine approximation if and only if it has an intersection type (in a non-idempotent and non-commutative intersection type system).

In Kfoury's paper, the notion of linearisation is defined indirectly, through an embedding of the $\lambda$-calculus into a new calculus, denoted $\Lambda^{\wedge}$, with a new notion of "linear" reduction, denoted $\beta^{\wedge}$. Kfoury had two major questions regarding $\Lambda^{\wedge}$: can the standard $\lambda$-calculus be simulated by $\Lambda^{\wedge}$, and, if such a compilation is possible, what are the advantages of compiling the standard $\lambda$-calculus into this linear calculus? To address these questions, Kfoury defined a notion of contraction of expanded terms in the new calculus into standard $\lambda$-terms, and a notion of lifting of $\beta$-reductions into $\beta^{\wedge}$-reductions. Well-formed terms of the new calculus are those for which there is a contracted term in the $\lambda$-calculus. Kfoury conjectured that for strongly normalising $\lambda$-terms there are corresponding well-formed expanded terms.

In several approaches dealing with linearisation, intersection types [4] play a crucial role. Note that both the contraction relation of Kfoury and the affine approximations of Mazza are defined by means of an intersection type system. Another linearisation procedure, using information given by intersection types, was defined by Florido and Damas [5], by means of a notion of expansion of terms in the $\lambda$-calculus into linear terms. One of the implications of the expansion relation is that any term typable using intersection types can be expanded to a term typable using simple types, similarly to what happens with affine approximations. Also, Bucciarelli et al. [3] have given a translation of intersection typing derivations into Curry-typable terms, preserving $\beta$-reduction. Expansion based on intersection types has some issues: one needs to consider the different algebraic properties of $\cap$, which lead to different expansion relations and different properties on preservation of reduction; and the linearisation/expansion relies on type information, while type inference for intersection types is undecidable in the general case.
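The idea behind type-driven expansion can be illustrated on the standard example of a duplicating term (a sketch of the general idea only; the precise expansion relation of [5] is more refined):

```latex
% \x. x x uses its bound variable twice, and is typable with an
% intersection type that records both uses:
%   \lambda x.\, x\, x \;:\; ((\alpha \to \beta) \cap \alpha) \to \beta
% Splitting the two occurrences of x into separate linear parameters
% yields a linear term that is typable with simple types alone:
\[
  \lambda x.\,x\,x \;:\; ((\alpha \to \beta) \cap \alpha) \to \beta
  \quad\rightsquigarrow\quad
  \lambda x_1.\lambda x_2.\,x_1\,x_2 \;:\; (\alpha \to \beta) \to \alpha \to \beta
\]
```

Each component of the intersection becomes a separate (linearly used) argument, which is why intersection typability of the original term corresponds to simple typability of its expansion.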

A different perspective was followed by Alves and Florido [1], using a characterisation of virtual redexes as paths in the initial term, deriving from Lévy's definition of labelled reduction [8]. A different question posed in this work was the possibility of simulating the $\lambda$-calculus by a proper linear subset of the calculus, rather than by a non-standard calculus. In this case the subset considered was called the weak linear $\lambda$-calculus. Weak linear terms have good properties: reduction is non-duplicating, hence strongly normalising; it is decidable whether a $\lambda$-term is weak linear; and type inference for weak linear terms is both decidable and polynomial. Linearisation from standard $\lambda$-terms into weak linear terms was defined by computing legal paths [2] on the initial term, identifying virtual redexes. The linearisation procedure preserves $\beta$-normal forms, provided that paths are chosen in the right way. Furthermore, if the procedure terminates then it yields a weak linear $\lambda$-term. However, similarly to Kfoury's linearisation, there is also a conjecture regarding termination: that the linearisation procedure is a total function on the strongly normalising $\lambda$-calculus.
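The decidability of linearity checks mentioned above can be made concrete with a small checker for plain (strict) linearity, i.e. every abstraction uses its bound variable exactly once. This is a simplification for illustration only: weak linearity in [1] is a more permissive notion, and the term representation and names below are ours, not the paper's.

```python
from dataclasses import dataclass
from typing import Union

# A minimal named representation of lambda-terms (illustrative only).
@dataclass
class Var:
    name: str

@dataclass
class Lam:
    param: str
    body: "Term"

@dataclass
class App:
    fun: "Term"
    arg: "Term"

Term = Union[Var, Lam, App]

def occurrences(t: Term, x: str) -> int:
    """Count free occurrences of variable x in term t."""
    if isinstance(t, Var):
        return 1 if t.name == x else 0
    if isinstance(t, Lam):
        # x is shadowed under a binder with the same name.
        return 0 if t.param == x else occurrences(t.body, x)
    return occurrences(t.fun, x) + occurrences(t.arg, x)

def is_linear(t: Term) -> bool:
    """A term is linear if every abstraction uses its parameter exactly once."""
    if isinstance(t, Var):
        return True
    if isinstance(t, Lam):
        return occurrences(t.body, t.param) == 1 and is_linear(t.body)
    return is_linear(t.fun) and is_linear(t.arg)

# \x. x x duplicates its argument, so it is not linear;
# its expansion \x1. \x2. x1 x2 is linear.
delta = Lam("x", App(Var("x"), Var("x")))
expanded = Lam("x1", Lam("x2", App(Var("x1"), Var("x2"))))
print(is_linear(delta), is_linear(expanded))  # False True
```

Since linear terms never duplicate their arguments under reduction, a syntactic check of this kind immediately yields strong normalisation for the fragment it accepts; weak linearity relaxes the "exactly once" condition while keeping the check decidable.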

In this talk we explore how these previous works are related and discuss the open conjectures on the termination of linearisation methods. In particular, we discuss possible metrics on the set of paths used in the linearisation procedure by Alves and Florido, and what happens to the set of paths at each step of the linearisation procedure. More precisely, the linearisation procedure follows a chain $S_1 \gg S_2 \gg \dotsb \gg S_n \gg \dotsb$ of sets of paths. Each iteration of the linearisation procedure may cause paths in lower sets to become non-linear again, or increase the number of paths in higher sets. An appropriate metric should guarantee that the procedure does not keep going back and forth, and that it does not cause the set of paths to grow indefinitely. Previous attempts at defining this metric have taken into account the free occurrences of variables in the body and argument of the linearised function; the number of paths affected by a linearisation step; the order of the paths in the chain followed by the procedure; etc. The appropriate metric is yet to be found, but these attempts suggest that it should somehow measure an underlying degree of "parallelism" in the paths created by the linearisation procedure.

We also observe what happens when the procedure is applied to non-normalising terms, in which case it does not terminate, due to an infinite set of paths. However, when restricting the chain $S_1 \gg S_2 \gg \dotsb \gg S_n$ up to a certain level, empirical evidence suggests that the procedure always terminates. These results point to the possibility of defining a notion of bounded linearisation, giving approximations to weak linear terms, which might prove sufficient for practical applications of linearisation.

Finally, we consider the relation between linearisation and intersection types, which indirectly establishes a connection between linearisation based on paths and Kfoury's linearisation, therefore offering a way to connect the two open conjectures.

References
[1] Sandra Alves and Mário Florido (2005): Weak linearization of the lambda calculus. Theoretical Computer Science 342(1), pp. 79-103.
[2] Andrea Asperti and Cosimo Laneve (1995): Paths, computations and labels in the lambda-calculus. Theoretical Computer Science 142(2), pp. 277-297.
[3] Antonio Bucciarelli, Silvia De Lorenzis, Adolfo Piperno and Ivano Salvo (1999): Some Computational Properties of Intersection Types. In: 14th Annual IEEE Symposium on Logic in Computer Science, Trento, Italy, July 2-5, 1999, pp. 109-118.
[4] Mario Coppo and Mariangiola Dezani-Ciancaglini (1980): An extension of the basic functionality theory for the lambda-calculus. Notre Dame Journal of Formal Logic 21(4), pp. 685-693.
[5] Mário Florido and Luis Damas (2004): Linearisation of the lambda-calculus and its relation with intersection type systems. Journal of Functional Programming 14(5), pp. 519-546.
[6] Dan R. Ghica and Alex Smith (2011): Geometry of Synthesis III: Resource Management Through Type Inference. In: Proceedings of the 38th Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL'11, ACM, New York, NY, USA, pp. 345-356, doi:10.1145/1926385.1926425.
[7] A. J. Kfoury (2000): A linearization of the Lambda-calculus and consequences. Journal of Logic and Computation 10(3), pp. 411-436.
[8] Jean-Jacques Lévy (1978): Réductions correctes et optimales dans le lambda calcul. Ph.D. thesis, Université Paris VII.
[9] Damiano Mazza, Luc Pellissier and Pierre Vial (2018): Polyadic Approximations, Fibrations and Intersection Types. Proceedings of the ACM on Programming Languages 2(POPL:6).