Marcus Hutter
Marcus Hutter (born 1967) is a German computer scientist and professor at the Australian National University. Hutter was born and educated in Munich, where he studied physics and computer science. In 2000 he joined Jürgen Schmidhuber's group at the Swiss Artificial Intelligence lab IDSIA, where he developed the first mathematical theory of optimal Universal Artificial Intelligence, based on Kolmogorov complexity and Ray Solomonoff's theory of universal inductive inference. In 2006 he also accepted a professorship at the Australian National University in Canberra.
Hutter's notion of universal AI describes the optimal strategy of an agent that seeks to maximize its expected future reward in an unknown dynamic environment, up to some fixed future horizon. This is the general reinforcement learning problem. The only assumption, following Solomonoff, is that the environment's reactions to the agent's actions follow some unknown but computable probability distribution.
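In this setting the agent and the environment interact in cycles: in cycle k the agent outputs an action a_k, and the environment returns an observation o_k together with a real-valued reward r_k. In simplified notation, the agent's objective up to horizon m is to maximize the expected total reward

\[
  \mathbb{E}\!\left[\sum_{k=t}^{m} r_k\right],
\]

where the expectation is taken with respect to the unknown environment distribution.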
Universal artificial intelligence
Hutter uses Solomonoff's inductive inference as a mathematical formalization of Occam's razor.[1] To this formalization he adds the expected value of an action: when the expected value of an action is computed across all computable theories that perfectly describe previous observations, shorter theories (in the sense of Kolmogorov complexity) receive greater weight.[2]
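Concretely, the weighting comes from Solomonoff's universal prior: in simplified notation, a finite observation string x receives the a priori probability

\[
  M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)},
\]

where U is a universal (monotone) Turing machine, the sum runs over all programs p whose output begins with x, and \ell(p) is the length of p in bits, so shorter programs, i.e. simpler theories, dominate the sum.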
At any time, given the limited observation sequence so far, what is the Bayes-optimal way of selecting the next action? Hutter proved that the answer is to use Solomonoff's universal prior to predict the future, and execute the first action of the action sequence that will maximize the predicted reward up to the horizon. He called this universal algorithm AIXI.
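Written out in the simplified notation above, with horizon m and programs q playing the role of candidate environments, the AIXI action choice at time t is

\[
  a_t \;=\; \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
  \bigl(r_t + \cdots + r_m\bigr)
  \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)},
\]

an expectimax over all future action–observation–reward sequences in which every computable environment q consistent with the history is weighted by 2^{-\ell(q)}.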
This is mainly a theoretical result. To overcome the problem that Solomonoff's prior is incomputable, in 2002 Hutter also published an asymptotically fastest algorithm for all well-defined problems. Given some formal description of a problem class, the algorithm systematically generates all proofs in a sufficiently powerful axiomatic system that allows for proving time bounds of solution-computing programs. Simultaneously, whenever a proof has been found that shows that a particular program has a better time bound than the previous best, a clever resource allocation scheme will assign most of the remaining search time to this program. Hutter showed that his method is essentially as fast as the unknown fastest program for solving problems from the given class, save for an additive constant independent of the problem instance. For example, if the problem size is n, and there exists an initially unknown program that solves any problem in the class within n^7 computational steps, then Hutter's method will solve it within 5n^7 + O(1) steps. The additive constant hidden in the O() notation may be large enough to render the algorithm practically infeasible despite its useful theoretical properties.
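The following Python fragment is only a toy sketch of this resource-allocation idea, not Hutter's actual construction: the proof search is replaced by a stream of hypothetical (program, proven time bound) pairs, the names (fastest_solver, proof_stream, slow, fast) are invented for illustration, and most of each round's compute is simply handed to the program with the best bound proven so far.

import itertools

def fastest_solver(instance, proof_stream, rounds=1000):
    """Toy scheduler: interleave a simulated proof search with running the
    program that currently has the best proven time bound.
    proof_stream yields (program, proven_bound) pairs in enumeration order;
    a program is a generator function that yields None while working and
    finally yields its result."""
    proofs = iter(proof_stream)
    best_bound, running = float("inf"), None
    for _ in range(rounds):
        # Small share of each round: advance the "proof search" a few steps.
        for program, bound in itertools.islice(proofs, 10):
            if bound < best_bound:
                best_bound = bound
                running = program(instance)   # switch to the newly proven program
        # Large share of each round: run the current best program.
        if running is not None:
            for _ in range(900):
                result = next(running, None)
                if result is not None:
                    return result
    return None

# Toy usage: a slow program is "proven" first, a faster one later.
def slow(instance):
    for _ in range(5000):
        yield None
    yield sum(instance)

def fast(instance):
    for _ in range(50):
        yield None
    yield sum(instance)

print(fastest_solver([1, 2, 3], [(slow, 5000), (fast, 50)]))   # prints 6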
Several algorithms approximate AIXI in order to make it run on a modern computer, at the expense of its perfect optimality.[3][4][5]
Hutter Prize for Lossless Compression of Human Knowledge
On August 6, 2006, Hutter announced the Hutter Prize for Lossless Compression of Human Knowledge, with an initial purse of 50,000 euros. Its intent is to encourage the advancement of artificial intelligence by exploiting Hutter's theory of optimal universal artificial intelligence.
Partial bibliography
- Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability. Springer, 2005. ISBN 3-540-22139-5.
- On Generalized Computable Universal Priors and their Convergence. Theoretical Computer Science, 2005.
- Optimality of Universal Bayesian Sequence Prediction for General Loss and Alphabet. Journal of Machine Learning Research, 4:971–1000, 2003.
- The Fastest and Shortest Algorithm for All Well-Defined Problems. International Journal of Foundations of Computer Science, 13(3):431–443, 2002.
References
- ^ M. Hutter. "On the Existence and Convergence of Computable Universal Priors". Algorithmic Learning Theory, Springer, 2003.
- ^ M. Hutter. Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability. Springer, 2005.
- ^ J. Veness, K. S. Ng, M. Hutter, et al. "A Monte Carlo AIXI Approximation". arXiv preprint, 2009.
- ^ J. Veness, K. S. Ng, M. Hutter, et al. "Reinforcement Learning via AIXI Approximation". AAAI, 2010. arXiv:1007.2049.
- ^ S. Pankov. "A Computational Approximation to the AIXI Model". Artificial General Intelligence, 2008.