Automated planning and scheduling
Automated planning and scheduling is a branch of artificial intelligence that concerns the realization of strategies or action sequences, typically for execution by intelligent agents, autonomous robots and unmanned vehicles. Unlike classical control and classification problems, the solutions are complex and must be discovered and optimized in multidimensional space.
In known environments with available models, planning can be done offline. Solutions can be found and evaluated prior to execution. In dynamically unknown environments, the strategy often needs to be revised online. Models and policies must be adapted. Solutions usually resort to iterative trial and error processes commonly seen in artificial intelligence. These include dynamic programming, reinforcement learning and combinatorial optimization. Languages used to describe planning and scheduling are often called action languages.
Overview
A typical planner takes three inputs: a description of the initial state of the world, a description of the desired goal, and a set of possible actions, all encoded in a formal language such as STRIPS or PDDL. The planner produces a sequence of actions that lead from the initial state to a state meeting the goal. An alternative language for describing planning problems is that of hierarchical task networks, in which a set of tasks is given, and each task can be either realized by a primitive action or decomposed into a set of other tasks.
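The three inputs above can be made concrete with a small sketch. The following is a minimal, hypothetical STRIPS-style domain (the action names and predicates are invented for illustration, not drawn from any standard benchmark): each action has a set of preconditions, an add list, and a delete list, and a breadth-first forward search finds a sequence of actions reaching the goal.

```python
from collections import deque

# Hypothetical STRIPS-like actions: (preconditions, add list, delete list).
ACTIONS = {
    "pick_up":  ({"hand_empty", "on_table"}, {"holding"}, {"hand_empty", "on_table"}),
    "put_down": ({"holding"}, {"hand_empty", "on_table"}, {"holding"}),
}

def forward_search(initial, goal):
    """Breadth-first forward state-space search over STRIPS-style actions."""
    frontier = deque([(frozenset(initial), [])])
    visited = {frozenset(initial)}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:                # every goal literal holds
            return plan
        for name, (pre, add, delete) in ACTIONS.items():
            if pre <= state:             # preconditions satisfied
                nxt = frozenset((state - delete) | add)
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, plan + [name]))
    return None                          # no plan exists

print(forward_search({"hand_empty", "on_table"}, {"holding"}))  # ['pick_up']
```

Real planners use the same state-transition idea but rely on heuristics and compact encodings to cope with exponentially large state spaces.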
The difficulty of planning depends on the simplifying assumptions employed, such as atomic time, deterministic actions, and complete observability. Classical planners, which make all of these assumptions, have been studied most fully. Some popular techniques include forward-chaining and backward-chaining state-space search, possibly enhanced by exploiting relationships among conditions (see graphplan) or by heuristics synthesized from the problem; search through plan space; and translation to propositional satisfiability (satplan).
Dropping the assumption of determinism and adopting a probabilistic model of uncertainty leads to the problem of policy generation for a Markov decision process (MDP) or (in the general case) partially observable Markov decision process (POMDP).
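Policy generation for an MDP can be sketched with value iteration. The toy MDP below (its states, transition probabilities, and rewards are invented for illustration) repeatedly applies the Bellman backup until the values stabilize, then extracts a greedy policy mapping each state to an action:

```python
# Toy MDP: P[s][a] = list of (probability, next_state, reward).
states = ["s0", "s1", "goal"]
actions = ["a", "b"]
P = {
    "s0":   {"a": [(0.8, "s1", 0.0), (0.2, "s0", 0.0)],
             "b": [(1.0, "s0", 0.0)]},
    "s1":   {"a": [(0.9, "goal", 1.0), (0.1, "s0", 0.0)],
             "b": [(1.0, "s1", 0.0)]},
    "goal": {"a": [(1.0, "goal", 0.0)], "b": [(1.0, "goal", 0.0)]},
}
gamma = 0.9  # discount factor

# Value iteration: repeated Bellman backups.
V = {s: 0.0 for s in states}
for _ in range(100):
    V = {s: max(sum(p * (r + gamma * V[ns]) for p, ns, r in P[s][a])
                for a in actions)
         for s in states}

# Greedy policy with respect to the converged values.
policy = {s: max(actions,
                 key=lambda a, s=s: sum(p * (r + gamma * V[ns])
                                        for p, ns, r in P[s][a]))
          for s in states}
print(policy["s0"], policy["s1"])  # a a
```

Unlike a classical plan (a fixed action sequence), the output is a policy: an action choice for every state, which remains valid under stochastic action outcomes. The POMDP case additionally maintains a belief distribution over states instead of a known state.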
Preference-based planning
Main article: Preference-based planning
In preference-based planning, the objective is not only to produce a plan but also to satisfy user-specified preferences.
Conditional planning
Conditional planning operates on an IF (condition) THEN (action) ELSE (action) model; the branching does not have to be binary. A conditional planner examines the current model and determines which action to take based on it. This allows the planning algorithm to enumerate possible contingencies in advance and then perform the best action given the observed situation. Conditional planning is, however, limited: if the number of conditions is sufficiently large, the problem quickly becomes intractable, and conditional planners can only act upon the conditions that were explicitly enumerated.
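A conditional plan can be represented as a tree whose internal nodes are IF/THEN/ELSE tests and whose leaves are action sequences. The sketch below (the condition and action names are invented) selects a branch based on observed conditions at execution time:

```python
# A conditional plan: either a list of actions, or a branch node
# ("if", condition, then_subplan, else_subplan). Names are illustrative.
plan = ("if", "door_open",
        ["walk_through"],
        ["open_door", "walk_through"])

def execute(node, observations):
    """Resolve a conditional plan into a linear action sequence
    given the observed truth values of its conditions."""
    if isinstance(node, list):          # leaf: plain action sequence
        return node
    _, cond, then_branch, else_branch = node
    chosen = then_branch if observations.get(cond, False) else else_branch
    return execute(chosen, observations)

print(execute(plan, {"door_open": False}))  # ['open_door', 'walk_through']
print(execute(plan, {"door_open": True}))   # ['walk_through']
```

The intractability noted above shows up directly in this representation: with n binary conditions, a complete conditional plan may need up to 2^n branches.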
Continuous planning
Continuous planning operates by performing some number of actions, examining the current state of the world, and then re-evaluating its goals. Continuous planners are allowed to revise their goals to satisfy current needs while working toward their ultimate goal. For instance, a continuous planning algorithm may decide that, in order to reach its goal, it must first plan for the completion of a sub-goal. This process allows continuous planners to operate autonomously, as they are built on an open-world assumption.
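The sense-plan-act cycle described above can be sketched as a loop that replans from the freshly observed state after every action. The "planner" and world model here are deliberately trivial and invented for illustration; a real system would call a full planner at each iteration:

```python
def plan_for(state, goals):
    """Trivial stand-in for a planner: one action per unsatisfied goal."""
    return [f"achieve:{g}" for g in goals if g not in state]

def continuous_loop(state, goals, max_steps=10):
    """Sense-plan-act: observe, replan, execute one action, repeat."""
    trace = []
    for _ in range(max_steps):
        plan = plan_for(state, goals)       # replan from the current state
        if not plan:
            break                           # all goals satisfied
        action = plan[0]
        trace.append(action)                # execute the first planned action
        state |= {action.split(":", 1)[1]}  # the world changes in response
    return trace

print(continuous_loop({"at_home"}, ["has_coffee", "at_work"]))
# ['achieve:has_coffee', 'achieve:at_work']
```

Because the plan is recomputed each cycle, the agent tolerates a world that changes between actions, which is what distinguishes this scheme from executing a fixed offline plan.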
Examples
- The Hubble Space Telescope uses a short-term system called SPSS and a long-term planning system called Spike.
See also
- Actor model
- Action description language
- Constraint programming
- Evolutionary computation
- Markov chain
- Message passing
- Multi-agent planning
- Partial-order planning
- Planning
- Reactive planning
- Scheduling
- Situation calculus
- Software agent
- Strategy (game theory)
- STRIPS