Distance minimization for reward learning from scored trajectories

Published

Conference Paper

© Copyright 2016, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Many planning methods rely on an immediate reward function as a portable and succinct representation of desired behavior. Rewards are often inferred from demonstrated behavior that is assumed to be near-optimal. We examine a framework, Distance Minimization IRL (DM-IRL), for learning reward functions from scores an expert assigns to possibly suboptimal demonstrations. By changing the expert's role from a demonstrator to a judge, DM-IRL relaxes some of the assumptions present in IRL, enabling learning from the scoring of arbitrary demonstration trajectories with unknown transition functions. DM-IRL complements existing IRL approaches by addressing different assumptions about the expert. We show that DM-IRL is robust to expert scoring error and prove that finding a policy that produces maximally informative trajectories for an expert to score is strongly NP-hard. Experimentally, we demonstrate that the reward function DM-IRL learns from an MDP with an unknown transition model can transfer to an agent with known characteristics in a novel environment, and that learning succeeds with limited training data.
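The abstract describes recovering a reward function by minimizing the distance between expert-assigned trajectory scores and the returns predicted by the learned reward. The following is a minimal Python sketch of that core idea under a linear-reward assumption (reward is a weighted sum of state features, fit by least squares); the function names, discount factor, and synthetic data are illustrative assumptions, not taken from the paper, and the sketch omits the paper's analysis of scoring error and informative-trajectory selection.

```python
import numpy as np

# Illustrative sketch of reward learning from scored trajectories:
# assume a linear reward r(s) = w . phi(s) and fit w so that predicted
# trajectory returns match the expert's scores as closely as possible.

def trajectory_feature_sum(trajectory_features, gamma=0.95):
    """Discounted sum of per-state feature vectors along one trajectory."""
    return sum((gamma ** t) * phi for t, phi in enumerate(trajectory_features))

def fit_reward_weights(trajectories, scores):
    """Recover reward weights w by ordinary least squares: minimize the
    distance between expert scores and predicted returns Phi @ w."""
    Phi = np.vstack([trajectory_feature_sum(traj) for traj in trajectories])
    scores = np.asarray(scores, dtype=float)
    w, *_ = np.linalg.lstsq(Phi, scores, rcond=None)
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([1.0, -0.5, 0.25])          # hidden "true" reward weights
    # Synthetic demonstrations: each trajectory is a list of state feature vectors.
    trajectories = [[rng.normal(size=3) for _ in range(10)] for _ in range(20)]
    # Noisy expert scores, standing in for human judgments of each trajectory.
    scores = [trajectory_feature_sum(t) @ true_w + rng.normal(scale=0.1)
              for t in trajectories]
    w_hat = fit_reward_weights(trajectories, scores)
    print("recovered weights:", np.round(w_hat, 2))
```

Because the scores constrain the reward only through observed trajectories, no transition model is needed to fit the weights; transferring the learned reward to a new environment, as in the paper's experiments, would then use whatever planner is available there.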

Cited Authors

  • Burchfiel, B; Tomasi, C; Parr, R

Published Date

  • January 1, 2016

Published In

  • 30th AAAI Conference on Artificial Intelligence, AAAI 2016

Start / End Page

  • 3330 - 3336

International Standard Book Number 13 (ISBN-13)

  • 9781577357605

Citation Source

  • Scopus