A Human-Centered Data-Driven Planner-Actor-Critic Architecture via Logic Programming

Daoming Lyu
(Auburn University)
Fangkai Yang
(NVIDIA Corporation)
Bo Liu
(Auburn University)
Steven Gustafson
(Maana Inc.)

Recent successes of Reinforcement Learning (RL) allow an agent to learn policies that surpass human experts, but RL remains time-hungry and data-hungry. By contrast, human learning is significantly faster because humans exploit prior, general knowledge and multiple sources of information. In this paper, we propose a Planner-Actor-Critic architecture for huMAN-centered planning and learning (PACMAN), in which an agent uses its prior, high-level, deterministic symbolic knowledge to plan goal-directed actions, and also integrates the Actor-Critic algorithm of RL to fine-tune its behavior toward both environmental rewards and human feedback. This work is the first unified framework in which knowledge-based planning, RL, and human teaching jointly contribute to an agent's policy learning. Our experiments demonstrate that PACMAN yields a significant jump-start at the early stage of learning, converges rapidly with small variance, and is robust to inconsistent, infrequent, and misleading feedback.
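To make the described architecture concrete, the sketch below illustrates in toy Python how a planner-actor-critic loop of this kind might be wired together: a stand-in symbolic planner restricts the actor to plan-consistent candidate actions, and the critic's TD error blends the environmental reward with a scalar human-feedback signal. This is a minimal illustration under stated assumptions, not the paper's implementation; all names (planner_candidates, human_feedback, FEEDBACK_WEIGHT) and the linear reward-shaping rule are hypothetical, and the paper's actual integration of planning and feedback is specified in the full text.

```python
import numpy as np

# Toy illustration of a planner-actor-critic loop with human feedback.
# Everything here (environment, planner, feedback rule) is a placeholder.

N_STATES, N_ACTIONS = 16, 4
ALPHA_ACTOR, ALPHA_CRITIC, GAMMA = 0.1, 0.2, 0.95
FEEDBACK_WEIGHT = 0.5  # assumed weight blending human feedback into reward

theta = np.zeros((N_STATES, N_ACTIONS))  # actor: action preferences
v = np.zeros(N_STATES)                   # critic: state-value estimates

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def planner_candidates(state):
    """Stand-in for the symbolic planner: returns only the actions
    consistent with the current high-level plan for this state."""
    return [a for a in range(N_ACTIONS) if (state + a) % 2 == 0]

def step(state, action):
    """Toy environment transition with a sparse goal reward."""
    next_state = (state + action + 1) % N_STATES
    reward = 1.0 if next_state == 0 else -0.01
    return next_state, reward

def human_feedback(state, action):
    """Stand-in for sparse, possibly inconsistent human feedback."""
    return 1.0 if action == 0 else 0.0

state = np.random.randint(N_STATES)
for _ in range(1000):
    # Actor chooses among the planner's candidate actions only.
    candidates = planner_candidates(state)
    probs = softmax(theta[state, candidates])
    action = int(np.random.choice(candidates, p=probs))

    next_state, env_reward = step(state, action)
    shaped = env_reward + FEEDBACK_WEIGHT * human_feedback(state, action)

    # Critic: TD error on the feedback-shaped reward.
    td_error = shaped + GAMMA * v[next_state] - v[state]
    v[state] += ALPHA_CRITIC * td_error

    # Actor: softmax policy-gradient step over the candidate set.
    grad = -probs
    grad[candidates.index(action)] += 1.0
    theta[state, candidates] += ALPHA_ACTOR * td_error * grad

    state = next_state
```

The design point the sketch tries to capture is that symbolic planning constrains exploration (which would explain the reported jump-start), while the actor-critic update still adapts the policy within the planner's candidate set in response to both environmental reward and human feedback.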

In Bart Bogaerts, Esra Erdem, Paul Fodor, Andrea Formisano, Giovambattista Ianni, Daniela Inclezan, German Vidal, Alicia Villanueva, Marina De Vos and Fangkai Yang (eds.): Proceedings of the 35th International Conference on Logic Programming (Technical Communications) (ICLP 2019), Las Cruces, NM, USA, September 20-25, 2019, Electronic Proceedings in Theoretical Computer Science 306, pp. 182–195.
Published: 19th September 2019.

ArXived at: https://dx.doi.org/10.4204/EPTCS.306.23