Published: 18th April 2017
Table of Contents

Preface
    Guillaume Bonfante and Georg Moser
Invited Talk: Resource Analysis of Distributed and Concurrent Programs
    Elvira Albert    1
Invited Talk: Whole Systems Energy Transparency: More Power to Software Developers!
    Kerstin Eder    2
Invited Talk: Challenges for Timing Analysis of Multi-Core Architectures
    Jan Reineke    4
Invited Talk: On Resource Analysis of Imperative Programs
    Lars Kristiansen    6
Automated Sized-Type Inference and Complexity Analysis
    Martin Avanzini and Ugo Dal Lago    7
GUBS Upper Bound Solver (Extended Abstract)
    Martin Avanzini and Michael Schaper    17
Towards Practical, Precise and Parametric Energy Analysis of IT Controlled Systems
    Bernard van Gastel and Marko van Eekelen    24
Computability in the Lattice of Equivalence Relations
    Jean-Yves Moyen and Jakob Grue Simonsen    38
Loop Quasi-Invariant Chunk Motion by peeling with statement composition
    Jean-Yves Moyen, Thomas Rubiano and Thomas Seiller    47
These proceedings collect the accepted regular papers and selected extended abstracts presented at the first joint international workshop on Developments in Implicit Computational complExity (DICE) and FOundational and Practical Aspects of Resource Analysis (FOPARA), held in Uppsala, Sweden, on April 22-23, 2017, as part of ETAPS.
The DICE workshop explores the area of Implicit Computational Complexity (ICC), which grew out of several proposals to use logic and formal methods to provide languages for complexity-bounded computation (e.g. PTIME or Logspace computation). It aims to study the computational complexity of programs without referring to external measuring conditions or a particular machine model, but only by considering language restrictions or logical/computational principles that entail complexity properties.
The FOPARA workshop serves as a forum for presenting original research results relevant to the analysis of resource (e.g. time, space, energy) consumption by computer programs. The workshop aims to bring together researchers who work on foundational issues with those who focus on practical results; both theoretical and practical contributions are therefore encouraged, as are papers that combine theory and practice.
Given the complementarity and synergy between these two communities, and following the successful co-location of DICE and FOPARA at ETAPS 2015 in London, the two workshops are held jointly for the first time at ETAPS 2017.
The joint DICE-FOPARA workshop thus provides a forum for original and established research results relevant both to implicit computational complexity theory and to the analysis of resource (e.g. time, space, energy) consumption by computer programs, welcoming theoretical contributions, practical contributions, and papers that combine the two.
We acknowledge the support of the ANR project ELICA (ANR-14-CE25-0005-3) and of DARPA/AFRL contract number FA8750-17-C-088, as well as the help provided by the ETAPS organisation. Last but not least, we want to thank EPTCS for their continuing support in publishing these proceedings.
Distributed and concurrent systems are composed of distributed nodes that communicate and coordinate their actions, and of concurrent tasks that interleave their execution within the distributed nodes. Resource analysis of such systems needs to consider their distribution, communication and task-interleaving aspects. In this talk, we will describe the basic framework proposed for the resource analysis of distributed concurrent systems, together with the new notions of cost that arise in this context. In particular, we will discuss two notions: peak cost, which captures the maximum amount of resources that each distributed node might require along the whole execution; and parallel cost, which corresponds to the maximum cost of the execution, taking into account that, when distributed tasks run in parallel, only the cost of the most expensive one needs to be accounted for.
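The two cost notions can be contrasted on a toy example. The sketch below is purely illustrative and not taken from the talk: the node names and task costs are invented, and resources are assumed to be non-releasing, so each node's peak equals its total cost.

```python
# Hypothetical sketch: peak vs. parallel cost for a distributed execution.
# Each node runs a sequence of tasks; costs are abstract units (invented).

traces = {
    "node_a": [3, 5, 2],   # costs of the tasks executed on node A
    "node_b": [4, 4],      # costs of the tasks executed on node B
}

# Peak cost: per node, the maximum amount of resources it may require
# along the whole execution (here: the node's total, as nothing is freed).
peak = {node: sum(tasks) for node, tasks in traces.items()}

# Parallel cost: nodes run in parallel, so the overall execution cost is
# bounded by the most expensive node, not by the sum over all nodes.
parallel = max(sum(tasks) for tasks in traces.values())

print(peak)      # {'node_a': 10, 'node_b': 8}
print(parallel)  # 10
```

Note how the parallel cost (10) is strictly smaller than the sequential total (18): accounting only for the most expensive node is what makes the bound precise for parallel executions.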
Energy efficiency is now a major, if not the major, constraint in electronic systems engineering. While significant progress has been made in low-power hardware design for more than a decade, the potential for savings is now far greater at the higher levels of abstraction in the system stack. The greatest savings are expected from energy-consumption-aware software. Designing software for energy efficiency requires visibility of energy consumption from the hardware, where the energy is consumed, all the way through to the programs that ultimately control what the hardware does. This visibility is termed energy transparency.
My lecture emphasizes the importance of energy transparency from hardware to software as a foundation for energy-aware software engineering. Energy transparency enables a deeper understanding of how algorithms and coding choices impact the energy consumption of a computation when executed on hardware. It is a key prerequisite for informed design-space exploration and helps system designers find the optimal trade-off between the performance, accuracy and energy consumption of a computation. Promoting energy efficiency to a first-class software design goal is an urgent research challenge that must be addressed to achieve truly energy-efficient systems.
I will start by illustrating how to measure the energy consumption of software on embedded platforms. This enables monitoring the energy consumption of software at runtime, providing insights into the power, energy and timing behaviour of algorithms, with some seemingly surprising results.
Energy models can be built to predict the energy consumption of programs based on execution statistics or trace data obtained from a simulator, or statically, by employing advanced static resource consumption analysis techniques. I will introduce our approach to energy consumption modelling at the Instruction Set Architecture (ISA) level, comparing the energy profiles of different instructions as well as exposing the impact of data width on energy consumption.
We then focus on two approaches to static analysis for energy consumption estimation: one based on solving recurrence equations, the other based on implicit path enumeration (IPET). With the former it is possible to extract parameterized energy consumption functions [10,6,9], while the latter produces absolute values. The analysis can be performed either directly at the ISA level [10,5] or at the Intermediate Representation (IR) level of the compiler, here the LLVM IR [9,6,5]. The latter is enabled by a novel mapping technique that associates costs at the ISA level with entities at the LLVM IR level.
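The contrast between the two approaches can be sketched on a single loop. The following is an illustrative toy, not the talk's actual analysis: the per-instruction energy costs and the loop bound are assumed figures, and the recurrence is the simplest possible one.

```python
# Illustrative toy (assumed cost figures, not a real ISA energy model).
# A recurrence-based analysis of a loop running n times yields a
# *parameterized* energy function E(n); an IPET-style analysis instead
# needs a concrete loop bound and produces a single absolute value.

C_BODY = 3.2   # assumed energy cost of one loop iteration (abstract units)
C_EXIT = 0.8   # assumed energy cost of loop setup/exit (abstract units)

def energy_by_unfolding(n):
    """Recurrence E(n) = E(n-1) + C_BODY, E(0) = C_EXIT, solved by unfolding."""
    e = C_EXIT
    for _ in range(n):
        e += C_BODY
    return e

def energy_closed_form(n):
    """Closed-form solution of the recurrence: E(n) = n * C_BODY + C_EXIT."""
    return n * C_BODY + C_EXIT

# The closed form agrees with the unfolded recurrence.
assert abs(energy_by_unfolding(10) - energy_closed_form(10)) < 1e-9

# IPET-style absolute bound: given a known maximum iteration count N_MAX,
# the result is the single number E(N_MAX) rather than a function of n.
N_MAX = 100
print(energy_closed_form(N_MAX))
```

The parameterized function remains usable for any input size, which is what makes the recurrence-based approach attractive when the loop bound depends on program inputs.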
A critical review of the assumptions underlying both analysis approaches, in particular an investigation into the impact of using a cost model that assigns a single energy cost to each instruction, leads to a better understanding of how the limitations of static resource bound analysis techniques affect the safety and tightness of the retrieved bounds. The fact that energy consumption is inherently data dependent will be illustrated for a selected set of instructions, some with rather beautiful heat maps. I will draw conclusions on how energy consumption analysis differs from execution time analysis, and give an intuition into why analysing for worst-case dynamic energy is infeasible in general. This leads me to discuss new research challenges for energy consumption modelling as well as for static energy consumption analysis.
I will close with a call to give ``more power'' to software developers so that ``cooler'' programs can be written in the future.
In real-time systems, timely computation of outputs is as important as computing the correct output values. Timing analysis is a fundamental step in proving that all timing constraints of an application are met. Given a program and a microarchitecture, the task of timing analysis is to determine an upper bound on the response time of the program on the given microarchitecture under all possible circumstances. While this task is in general undecidable, it can be solved approximately with sound abstractions. Current microarchitectural developments make timing analysis increasingly difficult: contemporary processor architectures employ deep pipelines, branch predictors, and caches to improve performance. Further, multi-core processors share buses, caches, and other resources, introducing interference between logically independent programs. I will discuss three challenges arising from these developments, and approaches to overcome them.
To what extent is it possible to estimate a program's resource requirements by analyzing the program code? I started my research on resource analysis of imperative programs about 17 years ago, and I published my last paper on the subject about five years ago. I will try to share some of the insights I gained during those years. I will also reflect a little upon the nature of this type of research: Is this pure theoretical computer science? Should we aim for real-life applications? To what extent can we expect such applications? Recently it has turned out that the theory I developed (together with Amir Ben-Amram and Neil Jones) for resource analysis may have real-life applications when it is re-used as compiler theory. Towards the end of my talk I will explain these applications.