Approximating Optimal Policies for Partially Observable Stochastic Domains
The problem of making optimal decisions under uncertain conditions is central to Artificial Intelligence. If the state of the world is known at all times, the world can be modeled as a Markov Decision Process (MDP). MDPs have been studied extensively, and many methods are known for determining optimal courses of action, or policies. The more realistic case, in which state information is only partially observable, is modeled by Partially Observable Markov Decision Processes (POMDPs), which have received much less attention. The best exact algorithms for these problems can be very inefficient in both space and time. We introduce Smooth Partially Observable Value Approximation (SPOVA), a new approximation method that can quickly yield good approximations which improve over time. This method can be combined with reinforcement learning methods, a combination that was very effective in our test cases.
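As a rough illustration of the kind of smooth value approximation the abstract refers to: the exact POMDP value function is a max over linear functions (alpha vectors) of the belief state, and one way to smooth it is to replace the max with a differentiable power-mean surrogate. The sketch below is an assumption-laden illustration of that idea, not the paper's exact formulation; the function name `smooth_value` and the choice of smoothing parameter are hypothetical.

```python
import math

def smooth_value(belief, alpha_vectors, k=20):
    """Smooth approximation of V(b) = max_i (alpha_i . b).

    Uses the surrogate (sum_i (alpha_i . b)^k)^(1/k), which is
    differentiable in the belief and approaches the exact max as
    k grows. Assumes each dot product alpha_i . b is positive.
    """
    dots = [sum(a * b for a, b in zip(alpha, belief))
            for alpha in alpha_vectors]
    # Power mean over the linear pieces; upper-bounds the true max.
    return sum(d ** k for d in dots) ** (1.0 / k)
```

For example, with belief `[0.5, 0.5]` and alpha vectors `[1, 0]` and `[0, 2]`, the exact max is 1.0, and the surrogate with a large `k` is only slightly above it. Because the surrogate is differentiable, its parameters can be adjusted by gradient methods, which is what makes a combination with reinforcement learning natural.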