Improving Competence for Reliable Autonomy

Connor Basich
(University of Massachusetts Amherst)
Justin Svegliato
(University of Massachusetts Amherst)
Kyle Hollins Wray
(Alliance Innovation Lab Silicon Valley)
Stefan J. Witwicki
(Alliance Innovation Lab Silicon Valley)
Shlomo Zilberstein
(University of Massachusetts Amherst)

Given the complexity of real-world, unstructured domains, it is often impossible or impractical to design models that include every feature needed to handle all possible scenarios that an autonomous system may encounter. For an autonomous system to be reliable in such domains, it should have the ability to improve its competence online. In this paper, we propose a method for improving the competence of a system over the course of its deployment. We specifically focus on a class of semi-autonomous systems known as competence-aware systems that model their own competence—the optimal extent of autonomy to use in any given situation—and learn this competence over time from feedback received through interactions with a human authority. Our method exploits such feedback to identify important state features missing from the system's initial model, and incorporates them into its state representation. The result is an agent that better predicts human involvement, leading to improvements in its competence and reliability, and as a result, its overall performance.
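To make the feedback-driven idea in the abstract concrete, the following is a minimal, hypothetical sketch (not the paper's actual algorithm): an agent projects observations onto its modeled state features, records human approvals and overrides per abstract state, and promotes a candidate feature into its state representation when that feature's values split the feedback inconsistently. All names (FeatureDiscoveryAgent, road_type, pedestrian_present, the threshold value) are illustrative assumptions.

```python
from collections import defaultdict


class FeatureDiscoveryAgent:
    """Illustrative sketch of feedback-driven feature discovery.

    Not the method from the paper; it only demonstrates the general idea of
    adding a missing state feature when human feedback is inconsistent under
    the current state representation.
    """

    def __init__(self, state_features, candidate_features, threshold=0.3):
        self.state_features = list(state_features)          # features in the current model
        self.candidate_features = list(candidate_features)  # observable but unmodeled features
        self.threshold = threshold
        # feedback[abstract_state][candidate_value] -> [approvals, overrides]
        self.feedback = defaultdict(lambda: defaultdict(lambda: [0, 0]))

    def abstract_state(self, observation):
        # Project the raw observation onto the modeled features only.
        return tuple(observation[f] for f in self.state_features)

    def record_feedback(self, observation, candidate, approved):
        # Tally human approvals/overrides, keyed by the candidate feature's value.
        counts = self.feedback[self.abstract_state(observation)][observation[candidate]]
        counts[0 if approved else 1] += 1

    def inconsistency(self, state, candidate_values):
        # Spread of override rates across candidate values; a large spread suggests
        # the candidate explains feedback the current state representation cannot.
        rates = []
        for value in candidate_values:
            approvals, overrides = self.feedback[state][value]
            total = approvals + overrides
            if total:
                rates.append(overrides / total)
        return max(rates) - min(rates) if len(rates) > 1 else 0.0

    def maybe_add_feature(self, candidate, candidate_values):
        # Promote the candidate feature if it explains disagreement in any state.
        if any(self.inconsistency(s, candidate_values) > self.threshold
               for s in list(self.feedback)):
            self.state_features.append(candidate)
            self.candidate_features.remove(candidate)
            return True
        return False


# Hypothetical usage: the model tracks only "road_type"; "pedestrian_present"
# is observable but unmodeled and turns out to predict human overrides.
agent = FeatureDiscoveryAgent(["road_type"], ["pedestrian_present"])
agent.record_feedback({"road_type": "residential", "pedestrian_present": True},
                      "pedestrian_present", approved=False)
agent.record_feedback({"road_type": "residential", "pedestrian_present": False},
                      "pedestrian_present", approved=True)
print(agent.maybe_add_feature("pedestrian_present", [True, False]))  # True
```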

In Rafael C. Cardoso, Angelo Ferrando, Daniela Briola, Claudio Menghi and Tobias Ahlbrecht: Proceedings of the First Workshop on Agents and Robots for reliable Engineered Autonomy (AREA 2020), Virtual event, 4th September 2020, Electronic Proceedings in Theoretical Computer Science 319, pp. 37–53.
Published: 23rd July 2020.

ArXived at: https://dx.doi.org/10.4204/EPTCS.319.4