Algorithmic information theory

Algorithmic information theory is a subfield of information theory and computer science that concerns itself with the relationship between computation and information. According to Gregory Chaitin, it is "the result of putting Shannon's information theory and Turing's computability theory into a cocktail shaker and shaking vigorously." (Chaitin, Algorithmic Information Theory)


Algorithmic information theory principally studies complexity measures on strings (or other data structures). Because most mathematical objects can be described in terms of strings, or as the limit of a sequence of strings, it can be used to study a wide variety of mathematical objects, including integers and real numbers.

This use of the term "information" might be a bit misleading, as it depends upon the concept of compressibility. Informally, from the point of view of algorithmic information theory, the information content of a string is equivalent to the length of the shortest possible self-contained representation of that string. A self-contained representation is essentially a program – in some fixed but otherwise irrelevant universal programming language – that, when run, outputs the original string.
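
As a rough illustration of this compressibility idea (Kolmogorov complexity itself is uncomputable), the length of a losslessly compressed encoding gives a computable upper bound on the length of a self-contained description. The sketch below uses Python's standard zlib module purely as such a proxy; the particular strings and variable names are invented for the example and are not part of the theory.

    import random
    import zlib

    # A highly regular string has an obvious short description: "repeat 'ab' 1500 times".
    regular = "ab" * 1500

    # A pseudo-random string of the same length; strictly speaking, the generating
    # program is itself a short description, but zlib finds no pattern to exploit.
    random.seed(0)
    noise = "".join(random.choice("abcdefghijklmnopqrstuvwxyz") for _ in range(3000))

    for name, s in [("regular", regular), ("noise", noise)]:
        # Compressed length is only a crude upper bound on description length.
        print(name, len(s), "->", len(zlib.compress(s.encode())))

Running this, the regular string compresses to a few dozen bytes while the pseudo-random one stays near its original length, mirroring the informal claim that random-looking data has higher algorithmic information content.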

From this point of view, a 3000-page encyclopedia actually contains less information than 3000 pages of completely random letters, despite the fact that the encyclopedia is much more useful. This is because to reconstruct the entire sequence of random letters, one must know, more or less, what every single letter is. On the other hand, if every vowel were removed from the encyclopedia, someone with reasonable knowledge of the English language could reconstruct it, just as one could likely reconstruct the sentence "Ths sntnc hs lw nfrmtn cntnt" from the context and consonants present. For this reason, high-information strings and sequences are sometimes called "random"; people also sometimes attempt to distinguish between "information" and "useful information" and attempt to provide rigorous definitions for the latter, with the idea that the random letters may have more information than the encyclopedia, but the encyclopedia has more "useful" information.

Unlike classical information theory, algorithmic information theory gives formal, rigorous definitions of a random string and a random infinite sequence that do not depend on physical or philosophical intuitions about nondeterminism or likelihood. (The set of random strings depends on the choice of the universal Turing machine used to define Kolmogorov complexity, but any choice gives identical asymptotic results because the Kolmogorov complexity of a string is invariant up to an additive constant depending only on the choice of universal Turing machine. For this reason the set of random infinite sequences is independent of the choice of universal machine.)
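
The machine-independence claim in the parenthesis is usually called the invariance theorem. In standard notation, with U and V universal Turing machines and K_U(x) the Kolmogorov complexity of a string x relative to U, it states that there is a constant c_{U,V}, depending only on U and V, such that

    \forall x : \; |K_U(x) - K_V(x)| \le c_{U,V}.

Because the constant does not grow with x, the two complexity measures agree up to an additive constant, which is why asymptotic statements about complexity are machine-independent.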

Some of the results of algorithmic information theory, such as Chaitin's incompleteness theorem, appear to challenge common mathematical and philosophical intuitions. Most notable among these is the construction of Chaitin's constant Ω, a real number which expresses the probability that a self-delimiting universal Turing machine will halt when its input is supplied by flips of a fair coin (sometimes thought of as the probability that a random computer program will eventually halt). Although Ω is easily defined, any consistent, computably axiomatizable theory can determine the values of only finitely many digits of Ω, so it is in some sense "unknowable", providing an absolute limit on knowledge that is reminiscent of Gödel's incompleteness theorems. Although the digits of Ω cannot all be determined, many properties of Ω are known; for example, it is an algorithmically random sequence and thus its binary digits are evenly distributed (in fact it is normal).
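
In symbols, for a fixed self-delimiting (prefix-free) universal machine U, Chaitin's constant is the halting probability

    \Omega_U \;=\; \sum_{p \,:\, U(p)\ \text{halts}} 2^{-|p|},

where |p| is the length of the program p in bits. Because the domain of U is prefix-free, Kraft's inequality guarantees that the sum converges to a real number strictly between 0 and 1. The value of Ω depends on the choice of U, which is why one speaks of "a" Chaitin constant; its key properties, such as algorithmic randomness and uncomputability, hold for every such choice.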


History

The field was developed by Andrey Kolmogorov, Ray Solomonoff and Gregory Chaitin, starting in the 1960s. There are several variants of Kolmogorov complexity or algorithmic information; the most widely used one is based on self-delimiting programs and is mainly due to Leonid Levin (1974). Per Martin-Löf also contributed significantly to the information theory of infinite sequences.

An axiomatic approach to algorithmic information theory based on the Blum axioms (Blum 1967) was introduced by Mark Burgin in a paper presented for publication by Andrey Kolmogorov (Burgin 1982). The axiomatic approach encompasses the other approaches to algorithmic information theory: the various measures of algorithmic information can be treated as particular cases of axiomatically defined measures. Instead of proving similar theorems, such as the basic invariance theorem, for each particular measure, all such results can be deduced from one corresponding theorem proved in the axiomatic setting, a general advantage of the axiomatic approach in mathematics. The axiomatic approach was further developed in the book (Burgin 2005) and applied to software metrics (Burgin and Debnath, 2003; Debnath and Burgin, 2003).

Precise definitions

A binary string is said to be random if the Kolmogorov complexity of the string is at least the length of the string. A simple counting argument shows that some strings of any given length are random, and almost all strings are very close to being random. Since Kolmogorov complexity depends on a fixed choice of universal Turing machine (informally, a fixed "description language" in which the "descriptions" are given), the collection of random strings does depend on the choice of fixed universal machine. Nevertheless, the collection of random strings, as a whole, has similar properties regardless of the fixed machine, so one can (and often does) talk about the properties of random strings as a group without having to first specify a universal machine.
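
The counting argument can be spelled out in a line. There are 2^n binary strings of length n, but the number of descriptions (programs) of length less than n is at most

    2^0 + 2^1 + \cdots + 2^{n-1} \;=\; 2^n - 1 \;<\; 2^n,

and each description computes at most one string, so at least one string of each length n has complexity at least n, i.e. is random. More generally, fewer than 2^{n-k} strings of length n can be compressed by more than k bits, which is the sense in which almost all strings are very close to random.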

An infinite binary sequence is said to be random if, for some constant c, the Kolmogorov complexity of every initial segment (of length n) of the sequence is at least n − c. Importantly, the complexity used here is prefix-free complexity; if plain complexity were used, there would be no random sequences. However, with this definition, it can be shown that almost every sequence (from the point of view of the standard measure, "fair coin" or Lebesgue measure, on the space of infinite binary sequences) is random. Also, since it can be shown that the Kolmogorov complexity relative to two different universal machines differs by at most a constant, the collection of random infinite sequences does not depend on the choice of universal machine (in contrast to finite strings). This definition of randomness is usually called Martin-Löf randomness, after Per Martin-Löf, to distinguish it from other similar notions of randomness. It is also sometimes called 1-randomness to distinguish it from stronger notions of randomness (2-randomness, 3-randomness, etc.).
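
In the usual notation, with K denoting prefix-free Kolmogorov complexity and X↾n the first n bits of the infinite sequence X, this definition reads

    X \text{ is random} \iff \exists c \; \forall n : \; K(X \upharpoonright n) \ge n - c.

This characterization of Martin-Löf randomness in terms of prefix-free complexity is the Levin–Schnorr theorem; Martin-Löf's original definition was stated in terms of effective null sets rather than complexity.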

(Related definitions can be made for alphabets other than the set {0,1}.)

See also

* Kolmogorov complexity
* Algorithmically random sequence
* Algorithmic probability
* Chaitin's constant
* Chaitin–Kolmogorov randomness
* Computationally indistinguishable
* Distribution ensemble
* Epistemology
* Invariance theorem
* Limits to knowledge
* Minimum description length
* Minimum message length
* Pseudorandom ensemble
* Pseudorandom generator
* Uniform ensemble

External links

* Algorithmic Information Theory (Scholarpedia)
* Chaitin's account of the history of AIT


Further reading

* Blum, M. (1967) On the Size of Machines, Information and Control, v. 11, pp. 257-265
* Blum, M. (1967a) A Machine-independent Theory of Complexity of Recursive Functions, Journal of the ACM, v. 14, No. 2, pp. 322-336
* Burgin, M. (1982) Generalized Kolmogorov complexity and duality in theory of computations, Soviet Math. Dokl., v. 25, No. 3, pp. 19-23
* Burgin, M. (1990) Generalized Kolmogorov Complexity and other Dual Complexity Measures, Cybernetics, No. 4, pp. 21-29
* Burgin, M. "Super-recursive algorithms", Monographs in computer science, Springer, 2005
* Calude, C.S. (1996) Algorithmic information theory: Open problems, J. UCS, v. 2, pp. 439-441
* Calude, C.S. "Information and Randomness: An Algorithmic Perspective", (Texts in Theoretical Computer Science. An EATCS Series), Springer-Verlag, Berlin, 2002
* Chaitin, G.J. (1966) On the Length of Programs for Computing Finite Binary Sequences, J. Association for Computing Machinery, v. 13, No. 4, pp. 547-569
* Chaitin, G.J. (1969) On the Simplicity and Speed of Programs for Computing Definite Sets of Natural Numbers, J. Association for Computing Machinery, v. 16, pp. 407-412
* Chaitin, G.J. (1975) A Theory of Program Size Formally Identical to Information Theory, J. Association for Computing Machinery, v. 22, No. 3, pp. 329-340
* Chaitin, G.J. (1977) Algorithmic information theory, IBM Journal of Research and Development, v. 21, No. 4, pp. 350-359
* Chaitin, G.J. "Algorithmic Information Theory", Cambridge University Press, Cambridge, 1987
* Kolmogorov, A.N. (1965) Three approaches to the definition of the quantity of information, Problems of Information Transmission, No. 1, pp. 3-11
* Kolmogorov, A.N. (1968) Logical basis for information theory and probability theory, IEEE Trans. Inform. Theory, vol. IT-14, pp. 662-664
* Levin, L.A. (1974) Laws of information conservation (nongrowth) and aspects of the foundation of probability theory, Problems of Information Transmission, v. 10, No. 3, pp. 206-210
* Levin, L.A. (1976) Various Measures of Complexity for Finite Objects (Axiomatic Description), Soviet Math. Dokl., v. 17, pp. 522-526
* Li, M., and Vitanyi, P. "An Introduction to Kolmogorov Complexity and its Applications", Springer-Verlag, New York, 1997
* Solomonoff, R.J. (1960) "A Preliminary Report on a General Theory of Inductive Inference", Technical Report ZTB-138, Zator Company, Cambridge, Mass.
* Solomonoff, R.J. (1964) A Formal Theory of Inductive Inference, Information and Control, v. 7, No. 1, pp. 1-22; No. 2, pp. 224-254
* van Lambalgen, M. (1989) Algorithmic Information Theory, Journal of Symbolic Logic, v. 54, pp. 1389-1400
* Zurek, W.H. (1991) Algorithmic Information Content, Church-Turing Thesis, physical entropy, and Maxwell’s demon, in Complexity, Entropy and the Physics of Information, (Zurek, W.H., Ed.) Addison-Wesley, pp. 73-89
* Zvonkin, A.K. and Levin, L.A. (1970) The Complexity of Finite Objects and the Development of the Concepts of Information and Randomness by Means of the Theory of Algorithms, Russian Mathematical Surveys, v. 25, No. 6, pp. 83-124
