Published: 3rd December 2020
Preface (Matt Luckcuck, Marie Farrell and Michael Fisher)
Invited Talk: Verifying Machine Ethics (Louise Dennis)
Invited Talk: Runtime Verification with Copilot 3 (Ivan Perez)
How to Formally Model Human in Collaborative Robotics (Mehrnoosh Askarpour), page 1
Towards Compositional Verification for Modular Robotic Systems (Rafael C. Cardoso, Louise A. Dennis, Marie Farrell, Michael Fisher and Matt Luckcuck), page 15
From Requirements to Autonomous Flight: An Overview of the Monitoring ICAROUS Project (Aaron Dutle, César Muñoz, Esther Conrad, Alwyn Goodloe, Laura Titolo, Ivan Perez, Swee Balachandran, Dimitra Giannakopoulou, Anastasia Mavridou and Thomas Pressburger), page 23
YAP: Tool Support for Deriving Safety Controllers from Hazard Analysis and Risk Assessments (Mario Gleirscher), page 31
A Formal Model for Quality-Driven Decision Making in Self-Adaptive Systems (Fatma Kachi, Chafia Bouanaka and Souheir Merkouche), page 48
Autonomous systems are highly complex and present unique challenges for the application of formal methods. Autonomous systems act without human intervention, and are often embedded in a robotic system, so that they can interact with the real world. As such, they exhibit the properties of safety-critical, cyber-physical, hybrid, and real-time systems.
This EPTCS volume contains the proceedings for the second workshop on Formal Methods for Autonomous Systems (FMAS 2020), which was held virtually on the 7th of December 2020. FMAS 2020 was an online, stand-alone workshop, as an adaptation to the ongoing COVID-19 restrictions. Despite the challenges this brought, we aimed to build on the success of the first FMAS workshop, held in 2019.
The goal of FMAS is to bring together leading researchers who are tackling the unique challenges of autonomous systems using formal methods, to present recent and ongoing work. We are interested in the use of formal methods to specify, model, or verify autonomous or robotic systems, in whole or in part. We are also interested in successful industrial applications and potential future directions for this emerging application of formal methods.
FMAS 2020 encouraged the submission of both long and short papers. In total, we received five long papers and one short paper, by authors in Algeria, Canada, India, the United Kingdom, and the United States of America. Each paper received four reviews, and we accepted five papers in total: four long papers and one short paper.
FMAS 2020 hosted two invited speakers. Louise A. Dennis, from the University of Manchester, was invited to talk about verifying machine ethics. This talk focused on the foundations of ethics and how autonomous systems can be made to be explicitly and verifiably ethical. Ivan Perez, from the National Institute of Aerospace and the NASA Formal Methods Group, was invited to talk about Copilot 3, which is a runtime verification framework for real-time embedded systems. This talk described how Copilot synthesises monitor code from a variety of temporal logic specifications.
Despite the disruption caused by the COVID-19 pandemic, our programme committee provided detailed, high quality reviews, which offered useful and constructive feedback to the authors. We thank them for their time and the effort that they have put into their reviews this year, especially given the current, stressful global situation. We would like to thank our invited speakers, Louise A. Dennis and Ivan Perez; the authors who submitted papers; our EPTCS editor, Martin Wirsing; and all of the attendees of FMAS 2020.
Matt Luckcuck, Marie Farrell, and Michael Fisher
Department of Computer Science, University of Manchester, Manchester, UK
Machine ethics is concerned with the challenge of constructing ethical and ethically behaving artificial agents and systems. One important theme within machine ethics concerns explicitly ethical agents – those which are not ethical simply because they are constrained by their programming or deployment to be so but which use a concept of ethics in some way as part of their operation. Normally this requires the provision of rules, utilities or priorities by a programmer, knowledge engineer or user. In this talk I will address the question of how such explicitly ethical programs can be verified. What kind of properties can we consider and what kind of errors might we find?
Ultra-critical systems require high levels of assurance, which cannot always be guaranteed at compile time. The use of runtime verification (RV) [2,1] enables monitoring these systems at runtime, to detect property violations early and limit their potential consequences. However, the introduction of monitors in ultra-critical systems poses a challenge, as failures and delays in the RV subsystem could affect other subsystems and threaten the mission as a whole.
In this talk we present Copilot 3, a runtime verification framework for real-time embedded systems. Copilot monitors are written in a compositional, stream-based language with support for a variety of temporal logics (e.g., Linear Temporal Logic, Past-time Linear Temporal Logic, and Metric Temporal Logic). Copilot also includes a number of libraries with functions such as majority vote, used to implement fault-tolerant monitors. All of this results in robust, high-level specifications that are easier to understand than low-level imperative implementations. The Copilot framework then translates monitor specifications into C99 code with static memory requirements, which can be compiled to run on embedded hardware.
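Copilot itself is a Haskell-embedded language, and its compiled C99 monitors are generated by the framework; as a rough, hypothetical illustration of the underlying idea only, the following Python sketch shows the two ingredients described above: a constant-memory past-time operator ("the property has always held so far") and a majority vote over redundant inputs for fault tolerance. Names and the sample scenario are invented for this sketch and are not part of Copilot's API.

```python
# Hypothetical sketch of stream-based, constant-memory runtime monitoring.
# This is NOT Copilot code; it only illustrates the concepts the talk describes.

def majority(votes):
    """Boyer-Moore majority vote: finds the majority element (if one exists)
    in a single pass with constant memory, the style of voting used to build
    fault-tolerant monitors over redundant inputs."""
    candidate, count = None, 0
    for v in votes:
        if count == 0:
            candidate, count = v, 1
        elif v == candidate:
            count += 1
        else:
            count -= 1
    return candidate

class AlwaysBeen:
    """Past-time temporal operator: true iff the monitored property has held
    on every sample observed so far. Needs only one bit of state."""
    def __init__(self):
        self.holds = True

    def step(self, prop_now):
        self.holds = self.holds and prop_now
        return self.holds

# Scenario (invented): three redundant temperature sensors; the monitored
# property is that a majority of sensors has always read below a limit.
LIMIT = 100.0
monitor = AlwaysBeen()
samples = [(90.0, 91.0, 200.0),   # one faulty sensor, majority still OK
           (95.0, 96.0, 94.0),    # all OK
           (120.0, 119.0, 121.0)] # genuine violation

results = []
for s in samples:
    ok = majority([reading < LIMIT for reading in s])  # vote on the property
    results.append(monitor.step(ok))                   # fold into past-time state

print(results)  # [True, True, False] -- the monitor latches on first violation
```

The one-bit `AlwaysBeen` state mirrors why such monitors have static memory requirements: each past-time operator folds the stream into a fixed amount of state, so the generated code can run on embedded hardware without dynamic allocation.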
We also discuss how Copilot has been used in experimental research by NIA, NASA and other organizations, its place in relation to other RV frameworks, and possible future directions for RV in autonomous systems.
This talk is based upon joint work with Alwyn Goodloe and Frank Dedden.