GMontag wrote: I don't see why infinite sequences would depend any less on the programming language than finite sequences. Your argument about including a specialized function for the sequence applies equally well to infinite sequences (assuming of course that the sequence is computable in the first place).
I didn't actually explain the definition for infinite sequences, so you probably came up with a different one, which is why you don't think it would apply. The real definition can be found in one of the Wikipedia articles I linked above. Here is a rather informal* explanation:

*Informal means I'm talking in terms of "programming languages", "interpreters", and other commonly understood notions from computer science, rather than in terms of universal prefix-free Turing machines.
For finite sequences, we defined a sequence as random if the length of the shortest program producing it is at least the length of the sequence itself, and nonrandom if some program is shorter. From now on we'll assume that our programs are composed entirely of 0s and 1s, so our programs are themselves strings; we'll write |x| for the length of the string x, and we'll define the (Kolmogorov) complexity C(x) as the length of the shortest program producing x. In this notation, x is nonrandom if C(x) < |x|.
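True Kolmogorov complexity is uncomputable, but you can get a feel for the "nonrandom means compressible" idea with an ordinary compressor: any compressor that shortens x acts as a witness that some description of x is shorter than x. Here's a toy sketch along those lines (the function name and the use of zlib are my own illustration, not part of the formal definition; a compressor only gives an upper bound on C(x)):

```python
import zlib

def compressed_len(bits: str) -> int:
    """Length in bits of a zlib-compressed encoding of the string.

    Only a crude *upper bound* stand-in for C(x): real Kolmogorov
    complexity is uncomputable, but any compressor that shortens x
    witnesses that x has a description shorter than x itself.
    """
    return 8 * len(zlib.compress(bits.encode(), 9))

regular = "01" * 4000              # highly patterned: compresses well
print(compressed_len(regular), len(regular))
```

A string like "01" repeated 4000 times compresses to far fewer than its 8000 characters, mirroring the fact that "print '01' 4000 times" is a short program.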
In order to define randomness for infinite strings, we need to modify our definition of complexity C, replacing it by what is called the "prefix-free" Kolmogorov complexity K. We replace our language by a "prefix-free" language, meaning that one valid program is never an initial segment of another. For example, if 0010100110 is a valid program, then 00101 cannot be, nor can 00101001101. Now we define K(x) as the complexity relative to our new prefix-free language. (Why we do this is a bit mysterious at first, but it turns out that the definition we'll give of random sequences is equivalent to two other definitions which look natural on the surface but quite different from this one, and if we tried to give the same definition using C instead of K, we'd end up with no random sequences at all. See Algorithmically random sequence (Wikipedia).)
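The prefix-free condition itself is easy to check mechanically for any finite set of candidate programs. A small sketch (the function name is mine; it relies on the fact that in lexicographic order, a string and any extension of it sort next to other extensions, so prefix clashes show up between adjacent entries):

```python
def is_prefix_free(programs) -> bool:
    """Check that no valid program is a proper prefix of another.

    `programs` is any collection of 0/1 strings. Sorting puts every
    string immediately before its extensions, so it suffices to
    compare adjacent pairs.
    """
    progs = sorted(programs)
    return all(not b.startswith(a) for a, b in zip(progs, progs[1:]))

print(is_prefix_free({"0010100110", "110", "10"}))   # True
print(is_prefix_free({"0010100110", "00101"}))       # False: 00101 is a prefix
```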
Now that we have K, we define randomness for infinite sequences as follows:
An infinite sequence X is called random if there is some constant c such that K(x) > n - c for all natural numbers n, where x is the string consisting of the first n bits of X.
Why doesn't this depend on our choice of programming language? If we have two programming languages L1 and L2, with corresponding complexities K1 and K2, we make the following observation: we can write, in L1, an interpreter for L2; say its length is c. So if K2(x) = n for some string x, there is a program P of length n, written in L2, which outputs x. Then there is a program written in L1 that says "interpret P as an L2 program and output whatever it outputs"; this program has length c + n (c for the interpreter, n for the length of P). So for any x, K1(x) is at most K2(x) + c. This argument works in the other direction too, since we can also write an interpreter for L1 in L2. So the complexities relative to different languages may differ, but their difference is bounded by a constant. Looking back at the definition of a random infinite sequence, we see that altering K by an amount bounded by a constant doesn't change which sequences are random.
Also, as one consequence of this definition, no computable sequence is random, which answers another part of your question.
Exercise: prove that if X is computable, then (i) there is a constant c such that the prefix-free complexity of the first n bits of X is at most 2 log2(n) + c, (ii) this is eventually as much smaller than n as you want, and therefore (iii) X is nonrandom.
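A hint for part (i): a prefix-free program for the first n bits of X only needs a fixed description of X's algorithm plus a self-delimiting encoding of n, and a standard self-delimiting encoding (double every bit of n's binary expansion, then terminate with the unduplicated pair 01) uses about 2 log2(n) bits. A sketch of that encoding (function name is mine):

```python
from math import log2

def prefix_code(n: int) -> str:
    """Self-delimiting encoding of n: double each bit, end with '01'.

    Every code word ends in the unique unduplicated pair '01', so no
    code word is a prefix of another, and the length is
    2*len(bin-digits of n) + 2 <= 2*log2(n) + c for a fixed c.
    """
    bits = format(n, "b")
    return "".join(b + b for b in bits) + "01"

for n in (5, 1000):
    w = prefix_code(n)
    print(n, w, len(w), 2 * log2(n))
```

Since 2 log2(n) + c eventually falls as far below n as you like, parts (ii) and (iii) follow.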
I'm looking forward to the day when the SNES emulator on my computer works by emulating the elementary particles in an actual, physical box with Nintendo stamped on the side.
"With math, all things are possible." —Rebecca Watson