Safe Policy Improvement by Minimizing Robust Baseline Regret

Conference Paper

Abstract

  • An important problem in sequential decision-making under uncertainty is to use limited data to compute a safe policy, i.e., a policy that is guaranteed to perform at least as well as a given baseline strategy. In this paper, we develop and analyze a new model-based approach to compute a safe policy when we have access to an inaccurate dynamics model of the system with known accuracy guarantees. Our proposed robust method uses this (inaccurate) model to directly minimize the (negative) regret w.r.t. the baseline policy. Contrary to existing approaches, minimizing the regret allows one to improve the baseline policy in states with accurate dynamics and to seamlessly fall back to it otherwise. We show that our formulation is NP-hard and propose an approximate algorithm. Our empirical results on several domains show that even this relatively simple approximate algorithm can significantly outperform standard approaches.
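
A rough sketch of the objective the abstract describes, in notation assumed here rather than taken from this record: write \rho(\pi, \xi) for the return of policy \pi under transition model \xi, \pi_B for the baseline policy, and \Xi for the set of dynamics models consistent with the known accuracy guarantees. The robust regret-minimizing policy would then solve

\[
\pi^{\star} \in \arg\max_{\pi \in \Pi} \; \min_{\xi \in \Xi} \; \bigl( \rho(\pi, \xi) - \rho(\pi_B, \xi) \bigr),
\]

so a nonnegative optimal value certifies that \pi^{\star} performs at least as well as \pi_B under every plausible model, while states with accurate dynamics leave room for improvement over \pi_B. Per the abstract, this max-min formulation is NP-hard, which motivates the approximate algorithm.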
Authors

  • Petrik, Marek
  • Ghavamzadeh, Mohammad
  • Chow, Yinlam

Publication Date: 2016
Start Page: 1
End Page: 25
Volume: 29