Proceedings 12th IEEE International Conference on Tools with Artificial Intelligence. ICTAI 2000

Abstract

Markov decision processes (MDPs) have been widely used as a framework for planning under uncertainty. They make it possible to compute optimal sequences of actions to achieve a given goal while accounting for actuator uncertainty. However, the algorithms classically used to solve MDPs are intractable for problems with large state spaces: plans are computed over the whole state space, without exploiting any knowledge about the initial state of the problem. We propose a new technique for building partial plans for a mobile robot by considering only a restricted MDP containing a small set of states that form a path between the initial state and the goal state. To ensure a good-quality solution, this path must be very similar to the one that would have been computed over the whole environment. We present a new method for computing partial plans, showing that representing the environment as a directed graph is very helpful for finding near-optimal paths. Partial plans obtained with this method are very similar to complete plans, and computing times are considerably reduced.
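To illustrate the general idea (this is a toy sketch, not the authors' implementation; all names, the grid world, and the transition model are hypothetical), one can extract a shortest path between the initial and goal states from the directed graph of states with Dijkstra's algorithm, then run value iteration only on the states of that path to obtain a partial plan:

```python
import heapq

# Hypothetical 5x5 grid world: states are (row, col). Each action succeeds
# with probability 0.8 and leaves the robot in place otherwise (a stand-in
# for the actuator uncertainty mentioned in the abstract).
N = 5
ACTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def neighbors(s):
    """Yield (action, successor) pairs for the in-grid moves from state s."""
    for a, (dr, dc) in ACTIONS.items():
        r, c = s[0] + dr, s[1] + dc
        if 0 <= r < N and 0 <= c < N:
            yield a, (r, c)

def shortest_path(start, goal):
    """Dijkstra on the directed state graph (unit edge costs)."""
    dist, prev = {start: 0}, {}
    heap = [(0, start)]
    while heap:
        d, s = heapq.heappop(heap)
        if s == goal:
            break
        if d > dist[s]:
            continue
        for _, t in neighbors(s):
            if d + 1 < dist.get(t, float("inf")):
                dist[t], prev[t] = d + 1, s
                heapq.heappush(heap, (d + 1, t))
    path, s = [goal], goal
    while s != start:          # walk the predecessor chain back to start
        s = prev[s]
        path.append(s)
    return list(reversed(path))

def value_iteration(states, goal, gamma=0.95, p=0.8, eps=1e-6):
    """Value iteration restricted to `states`; cost -1 per step, goal absorbing.
    Transitions leaving the restricted set are simply pruned."""
    V = {s: 0.0 for s in states}
    policy = {}
    while True:
        delta = 0.0
        for s in states:
            if s == goal:
                continue
            best_v, best_a = float("-inf"), None
            for a, t in neighbors(s):
                if t not in states:
                    continue
                q = -1.0 + gamma * (p * V[t] + (1 - p) * V[s])
                if q > best_v:
                    best_v, best_a = q, a
            delta = max(delta, abs(best_v - V[s]))
            V[s], policy[s] = best_v, best_a
        if delta < eps:
            return V, policy

start, goal = (0, 0), (4, 4)
path = shortest_path(start, goal)          # small set of states near-optimal
V, policy = value_iteration(set(path), goal)  # plan only on that restricted MDP
```

Value iteration here touches only the handful of states on the path instead of all N*N states, which is the source of the computing-time reduction the abstract reports; the quality of the resulting partial plan hinges on how close the graph path is to the path an all-states solution would follow.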