Model-Free Learning of Safe yet Effective Controllers

Journal Article

We study the problem of learning control policies that are both safe and effective; i.e., that maximize the probability of satisfying a linear temporal logic (LTL) specification of the task, as well as the discounted reward capturing the (classic) control performance. We consider unknown environments modeled as Markov decision processes. We propose a model-free reinforcement learning algorithm that learns a policy that first maximizes the probability of ensuring safety, then the probability of satisfying the given LTL specification, and lastly, the sum of discounted Quality of Control rewards. Finally, we illustrate the applicability of our RL-based approach.
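The lexicographic ordering of objectives described in the abstract (safety first, then LTL satisfaction, then discounted reward) can be illustrated with a generic action-selection sketch. This is not the authors' algorithm; the function name, the use of one Q-table per objective, and the tie-breaking tolerance are all illustrative assumptions.

```python
import numpy as np

def lexicographic_action(q_tables, state, tol=1e-6):
    """Pick an action by lexicographic preference over Q-value tables.

    q_tables: list of (num_states, num_actions) arrays ordered by
    priority, e.g. [Q_safety, Q_ltl, Q_reward] (illustrative layout).
    At each priority level, keep only the surviving actions whose value
    is within `tol` of the best survivor; remaining ties are broken by
    the next table in the list.
    """
    candidates = np.arange(q_tables[0].shape[1])
    for q in q_tables:
        vals = q[state, candidates]
        candidates = candidates[vals >= vals.max() - tol]
        if len(candidates) == 1:
            break
    return int(candidates[0])

# Toy single-state example: actions 0 and 1 are equally safe,
# so the LTL-satisfaction table breaks the tie in favor of action 1.
q_safety = np.array([[1.0, 1.0, 0.0]])
q_ltl = np.array([[0.2, 0.5, 0.9]])
q_reward = np.array([[0.9, 0.1, 0.5]])
```

Each Q-table would be learned by an ordinary model-free update (e.g. Q-learning) on its own reward channel; the lexicographic selection above only governs how the learned values are combined when acting.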

Cited Authors

  • Bozkurt, AK; Wang, Y; Pajic, M

Published Date

  • January 1, 2021

Published In

  • Proceedings of the IEEE Conference on Decision and Control

Volume / Issue

  • 2021-December

Start / End Page

  • 6560 - 6565

Electronic International Standard Serial Number (EISSN)

  • 2576-2370

International Standard Serial Number (ISSN)

  • 0743-1546

Digital Object Identifier (DOI)

  • 10.1109/CDC45484.2021.9683634

Citation Source

  • Scopus