This paper reviews algorithmic information theory, which is an attempt to apply information-theoretic and probabilistic ideas to recursive function theory. Typical concerns in this approach are, for example, the number of bits of information required to specify an algorithm, or the probability that a program whose bits are chosen by coin flipping produces a given output. During the past few years the definitions of algorithmic information theory have been reformulated. The basic features of the new formalism are presented here and certain results of R. M. Solovay are reported.

Historical Introduction

To our knowledge, the first publication of the ideas of algorithmic information theory was the description of R. J. Solomonoff’s ideas given in 1962 by M. L. Minsky in his paper, “Problems of formulation for artificial intelligence” [1]:

Consider a slightly different form of inductive inference problem. Suppose that we are given a very long “data” sequence of symbols; the problem is to make a prediction about the future of the sequence. This is a problem familiar in discussions concerning “inductive probability.” The problem is refreshed a little, perhaps, by introducing the modern notion of universal computer and its associated language of instruction formulas. An instruction sequence will be considered acceptable if it causes the computer to produce a sequence, perhaps infinite, that begins with the given finite “data” sequence. Each acceptable instruction sequence thus makes a prediction, and Occam’s razor would choose the simplest such sequence and advocate its prediction. (More generally, one could weight the different predictions by weights associated with the simplicities of the instructions.) If the simplicity function is just the length of the instructions, we are then trying to find a minimal description, i.e., an optimally efficient encoding of the data sequence.

Such an induction method could be of interest only if one could show some significant invariance with respect to choice of defining universal machine. There is no such invariance for a fixed pair of data strings. For one could design a machine which would yield the entire first string with a very small input, and the second string only for some very complex input. On the brighter side, one can see that in a sense the induced structure on the space of data strings has some invariance in an “in the large” or “almost everywhere” sense. Given two different universal machines, the induced structures cannot be desperately different. We appeal to the “translation theorem” whereby an arbitrary instruction formula for one machine may be converted into an equivalent instruction formula for the other machine by the addition of a constant prefix text. This text instructs the second machine to simulate the behavior of the first machine in operating on the remainder of the input text. Then for data strings much larger than this translation text (and its inverse) the choice between the two machines cannot greatly affect the induced structure. It would be interesting to see if these intuitive notions could be profitably formalized.

Even if this theory can be worked out, it is likely that it will present overwhelming computational difficulties in application. The recognition problem for minimal descriptions is, in general, unsolvable, and a practical induction machine will have to use heuristic methods. [In this connection it would be interesting to write a program to play R. Abbott’s inductive card game [2].]
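The minimal-description induction Minsky describes can be made concrete with a deliberately crude toy model (our illustration, not part of the original discussion): take the “computer” to be a machine whose programs are seed strings and whose output is the seed repeated forever. The shortest acceptable program is then the shortest seed whose repetition begins with the observed data, and its prediction is the next symbol of that repetition.

```python
def shortest_seed(data: str) -> str:
    """Return the shortest seed s whose infinite repetition begins with data.

    This is the minimal "program" under the toy machine: output(s) = sss...
    """
    for length in range(1, len(data) + 1):
        seed = data[:length]
        # Acceptable iff repeating the seed reproduces every observed symbol.
        if all(data[i] == seed[i % length] for i in range(len(data))):
            return seed
    return data  # incompressible under this toy machine


def predict_next(data: str) -> str:
    """Occam's-razor prediction: continue the repetition of the minimal seed."""
    seed = shortest_seed(data)
    return seed[len(data) % len(seed)]
```

For example, `shortest_seed("abababa")` is `"ab"`, so the predicted next symbol is `"b"`; a patternless string such as `"xyz"` is its own shortest description. The toy also shows why invariance fails for fixed strings: a different machine (say, one that repeats the seed reversed) would assign the same data a different minimal description.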

Algorithmic information theory originated in the independent work of Solomonoff (see [1, 3–6]), of A. N. Kolmogorov and P. Martin-Löf (see [7–14]), and of G. J. Chaitin (see [15–26]). Whereas Solomonoff weighted together all the programs for a given result into a probability measure, Kolmogorov and Chaitin concentrated their attention on the size of the smallest program. Recently it has been realized by Chaitin and independently by L. A. Levin that if programs are stipulated to be self-delimiting, these two differing approaches become essentially equivalent. This paper attempts to cast into a unified scheme the recent work in this area by Chaitin [23,24] and by R. M. Solovay [27,28]. The reader may also find it interesting to examine the parallel efforts of Levin (see [29–35]). There has been a substantial amount of other work in this general area, often involving variants of the definitions deemed more suitable for particular applications (see, e.g., [36–47]).
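The equivalence of the two approaches in the self-delimiting formalism can be stated informally as follows (the notation is ours, a sketch of the results developed in the sections below):

```latex
% Invariance: for any two universal self-delimiting machines U and V
% there is a constant c (the length of a simulation prefix) such that
H_U(x) \le H_V(x) + c \quad \text{for all strings } x.

% Coding theorem: the size H(x) of the smallest program for x and the
% probability P(x) that a coin-flip program produces x agree to within O(1):
P(x) = \sum_{U(p)\,=\,x} 2^{-|p|}, \qquad H(x) = -\log_2 P(x) + O(1).
```

Thus, up to a bounded additive term, Solomonoff's probability measure and the Kolmogorov–Chaitin smallest-program size carry the same information.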


By G. J. Chaitin

