When the optimal policy is independent of the initial state

A Markov decision process (MDP) is a popular model of sequential decision making, but its standard objective of minimizing the expected cumulative cost is often inadequate, for example, when the possibility of a large loss must be avoided. Risk-sensitive objective functions and constraints have therefore been proposed for MDPs. Unlike in the standard MDP, however, the optimal policy for some of these MDPs can depend on the initial state, so that the policy that is optimal from the current state can change as the agent moves from state to state. We show that an agent that, at every state, follows the policy that is optimal with that state as the initial state can surely (i.e., with probability one) incur a larger cumulative cost than it would by following other policies. We then establish sufficient conditions on the objective function and on the constraints under which the optimal policies are consistent across the initial states. We also show when these sufficient conditions are necessary. Finally, we discuss the implications of our results for the MDPs that have been studied in the literature, stating whether their optimal policies depend on the initial states.
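To illustrate the phenomenon the abstract describes, here is a minimal sketch of our own construction (not taken from the report): a two-stage MDP with the risk-sensitive objective of minimizing the probability that the total cost reaches a threshold. The states s0 and s1, the actions "safe" and "risky", the cost distributions, and the threshold THETA are all hypothetical choices made for illustration.

# A toy two-stage MDP (hypothetical; not from the report) in which the
# risk-sensitive objective "minimize P(total cost >= THETA)" makes the
# optimal action at state s1 depend on the initial state.

THETA = 3  # cost threshold (an assumed value for this illustration)

# Cost distribution of the single transition from s0 to s1: {cost: prob}.
S0_COSTS = {0: 0.5, 2: 0.5}

# Cost distributions of the two actions available at s1 (then terminate).
S1_ACTIONS = {
    "safe":  {1: 1.0},          # deterministic moderate cost
    "risky": {0: 0.8, 3: 0.2},  # usually free, occasionally costly
}

def risk_from_s1(action):
    """P(total cost >= THETA) when the process starts at s1."""
    return sum(p for c, p in S1_ACTIONS[action].items() if c >= THETA)

def risk_from_s0(action):
    """P(total cost >= THETA) when the process starts at s0."""
    return sum(p0 * p1
               for c0, p0 in S0_COSTS.items()
               for c1, p1 in S1_ACTIONS[action].items()
               if c0 + c1 >= THETA)

best_from_s1 = min(S1_ACTIONS, key=risk_from_s1)  # "safe"  (0.0 vs 0.2)
best_from_s0 = min(S1_ACTIONS, key=risk_from_s0)  # "risky" (0.2 vs 0.5)

print("optimal action at s1, planning from s1:", best_from_s1)
print("optimal action at s1, planning from s0:", best_from_s0)

Running the sketch prints "safe" when planning from s1 but "risky" when planning from s0: an agent that re-solves the problem upon reaching s1 abandons the risky action that its plan from s0 relied on, which is the kind of inconsistency across initial states that the report analyzes.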

By: Takayuki Osogami and Tetsuro Morimura

Published in: IBM Research Report RT0966, 2015

LIMITED DISTRIBUTION NOTICE:

This report has been submitted for publication outside of IBM and will probably be copyrighted if accepted for publication. It has been issued as a Research Report for early dissemination of its contents. In view of the transfer of copyright to the outside publisher, its distribution outside of IBM prior to publication should be limited to peer communications and specific requests. After outside publication, requests should be filled only by reprints or legally obtained copies of the article (e.g., payment of royalties).

RT0966.pdf

Questions about this service can be mailed to reports@us.ibm.com.