enum ಠ_ಠ {°□°╰=1, °Д°╰, ಠ益ಠ╰};
void ┻━┻︵╰(ಠ_ಠ ⚠) {exit((int)⚠);}
the chances of finding, say, Moby Dick encoded within the first googol digits are probably infinitesimally small
angel.white wrote:Will that be somewhere in the infinite expansion of pi?
angel.white wrote:If it is truly random, we can expect this to eventually happen, but does irrationality imply randomness?
angel.white wrote:Is it possible to get 100 sixes (or any digit) in a row?
ConMan wrote:However, the number is believed to be "normal", which means that if you fed its digits into a computer, it would not be able to distinguish that sequence of digits from a "true" sequence of random numbers drawn from a uniform distribution. In that sense, you can't predict the next digit in the decimal expansion without calculating it, any more than you could predict the roll of a "perfect" d10; you can't predict the next pair of digits better than you could predict a roll of a "perfect" d100 (ungodly things that they are); and so forth. As said, pi is suspected, but not proven, to be normal, although studies of the known decimal expansion have shown reasonably consistent results (but of course, studying the first n decimal places and finding them to match a uniform random distribution does nothing to prevent the expansion from the (n+1)th place onward being completely different).
davetp425 wrote:Wow. It's funny that we can calculate the digits of pi without knowing all of their properties. Incidentally, how do they calculate pi? Is there some nice formula like there is for e, like \lim_{n\rightarrow\infty}(1+\frac{1}{n})^n, that they can keep reevaluating for higher and higher precision? Also, given that we're unsure whether pi is normal, can we at least find the probability that a given string exists, or does this fall within the realm of uncertainty as opposed to probability?
Some of us exist to find out what can and can't be done.
Others exist to hold the beer.
btilly wrote:davetp425 wrote:Wow. It's funny that we can calculate the digits of pi without knowing all of their properties. Incidentally, how do they calculate pi? Is there some nice formula like there is for e, like \lim_{n\rightarrow\infty}(1+\frac{1}{n})^n, that they can keep reevaluating for higher and higher precision? Also, given that we're unsure whether pi is normal, can we at least find the probability that a given string exists, or does this fall within the realm of uncertainty as opposed to probability?
That isn't how we calculate e. The usual approach is to use the series: e = 1/0! + 1/1! + 1/2! + 1/3! + 1/4! + ...
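A quick sketch of the two approaches side by side (my own Python, not from the thread; function names are mine): both converge to e, but the factorial series gets there far faster than the limit definition.

```python
import math

# Partial sums of the series btilly mentions: e = 1/0! + 1/1! + 1/2! + ...
def e_series(terms):
    total, factorial = 0.0, 1
    for k in range(terms):
        if k > 0:
            factorial *= k
        total += 1.0 / factorial
    return total

# davetp425's limit definition (1 + 1/n)^n, which converges much more slowly.
def e_limit(n):
    return (1 + 1 / n) ** n

# Twenty series terms already agree with math.e to near machine precision,
# while the limit with n = 10^6 is still only correct to about 6 decimals.
```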
See http://mathworld.wolfram.com/PiFormulas.html (warning, slow to load) for a long list of formulas for pi. See http://www.cs.uwaterloo.ca/~alopez-o/ma ... ode12.html and http://www.cecm.sfu.ca/organics/papers/ ... index.html for two different explanations of methods that can be used to compute large numbers of digits of pi on a computer.
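As one concrete example of the kind of formula those pages describe (this sketch is mine, not taken from the links): Machin's 1706 identity pi/4 = 4*arctan(1/5) - arctan(1/239), evaluated with the arctan Taylor series, already gives full double precision in a handful of terms.

```python
import math

# arctan(x) via its Taylor series x - x^3/3 + x^5/5 - ...,
# which converges quickly when |x| is small.
def arctan_series(x, terms=30):
    total, power = 0.0, x
    for k in range(terms):
        total += (-1) ** k * power / (2 * k + 1)
        power *= x * x
    return total

# Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239).
def machin_pi():
    return 16 * arctan_series(1 / 5) - 4 * arctan_series(1 / 239)
```

Serious record-setting computations use fancier machinery (the linked pages cover them), but the principle is the same: a series whose error shrinks fast enough to be worth reevaluating.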
jestingrabbit wrote:btilly wrote:davetp425 wrote:Wow. It's funny that we can calculate the digits of pi without knowing all of their properties. Incidentally, how do they calculate pi? Is there some nice formula like there is for e, like \lim_{n\rightarrow\infty}(1+\frac{1}{n})^n, that they can keep reevaluating for higher and higher precision? Also, given that we're unsure whether pi is normal, can we at least find the probability that a given string exists, or does this fall within the realm of uncertainty as opposed to probability?
That isn't how we calculate e. The usual approach is to use the series: e = 1/0! + 1/1! + 1/2! + 1/3! + 1/4! + ...
davetp425's limit does evaluate to e and, historically, it's how the constant first came to be studied (by one of the Bernoullis). There is no way at present to determine whether a given string occurs. We don't even know if there are an infinite number of 1's in the decimal representation of pi.
phlip wrote:Even 0.123456789101112131415..., which is normal (indeed, I think it was the first number ever proven to be normal), couldn't really be called "random". However, normality means that, if you look for a string in the digits, it'll appear around the same number of times you'd expect from a random sequence (just not necessarily with the same distribution).
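phlip's example is the Champernowne constant. A short sketch (function name mine) of generating its digits shows why every finite string must appear: the string, read as an integer, eventually gets concatenated in.

```python
# Digits of 0.123456789101112...: concatenate 1, 2, 3, ... in base 10.
def champernowne_digits(n_digits):
    out = []
    k = 1
    while len(out) < n_digits:
        out.extend(str(k))
        k += 1
    return "".join(out[:n_digits])

# "271828" is guaranteed to appear by the time the integer 271828 itself
# is concatenated, i.e. within roughly the first 1.6 million digits
# (it may also appear earlier, straddling a boundary between integers).
```

This is exactly why its normality doesn't make it feel "random": given any position, the surrounding digits are completely predictable.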
Xanthir wrote:Rather, normality is a property that says that, for a given base, all possible combinations of digits in that base occur with the expected "equal" frequency.
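To make Xanthir's definition concrete, here's a sketch (my own helper, not from the thread) that tallies the frequency of each length-k block in a digit string; for a normal number each length-k block tends to frequency 10^-k.

```python
from collections import Counter

# Frequency of each length-k block among the overlapping
# length-k substrings of a digit string.
def block_frequencies(digits, k):
    n = len(digits) - k + 1
    counts = Counter(digits[i:i + k] for i in range(n))
    return {block: c / n for block, c in counts.items()}
```

Note that a string can pass the k = 1 test while failing k = 2: repeating "0123456789" has every single digit at exactly frequency 1/10, yet the block "00" never occurs. That's why the definition quantifies over all block lengths, not just single digits.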
troyp wrote:If you chose a point infinitely far in, so to speak
troyp wrote:can someone give me an example of a (preferably finite) set of digits with equally-likely occurrence of any given sequence of any given length
gmalivuk wrote:This is your problem. There's no such thing as "infinitely far in". Wherever you end up, you scroll to the right a large enough finite distance and you'll see what number you're on. Then you can predict the rest of the expansion without any problem.
gmalivuk wrote:troyp wrote:can someone give me an example of a (preferably finite) set of digits with equally-likely occurrence of any given sequence of any given length
No finite such set of digits exists. Because of your "any given length" criterion, we could just pick a length that's one more than the finite length of the string, and be out of luck.
phlip wrote:The problem with this thread is that no one's defined what they mean when they say "random" (and I'm guilty of this too).
phlip wrote:The usual definition of a "random sequence" is a sequence of numbers such that, even knowing how it's generated and all the values so far, it's impossible to predict the next one.
gmalivuk wrote:Or I can give you a number, and you can tell me the sequence of digits starting that many places beyond the decimal.
troyp wrote:If you had 5 digits and you looked at all substrings of length 6 (empty set), every substring would be equally likely. That's what I meant by that part. Attempting brevity last night, I worded this request very badly. I thought people would know what I meant. Looking at it now, even I don't know what I meant. Well, I meant, "can someone show me a finite string of digits obeying this property which is clearly not random, so I can see what difference there would be", but I'm not sure exactly how someone would do that with a finite string.
troyp wrote:phlip wrote:The problem with this thread is that no one's defined what they mean when they say "random" (and I'm guilty of this too).
...and me, in particular. I knew I was doing it, but I figured since I was only trying to convey my intuitive reasoning to others, it was okay - but it's not.
skeptical scientist wrote:Sure we can, we just have to wait a while. Even if we know nothing about N other than the fact that it is very large, once the string starts repeating itself from the beginning for a large period, we can predict, with a high degree of certainty, all subsequent digits. Of course, to get to that point, we will have to have already seen more than N many digits, which presumably will never happen. So essentially, this isn't terribly different from your black box just being a random number generator.
skeptical scientist wrote:However, if we knew something about N, we could make stronger predictions.
skeptical scientist wrote:That's one reason why it's a bad idea to conflate "normal" with "random" - random numbers are all normal (at least in the sense that the non-normal numbers have measure 0), but not all normal numbers should be considered random.
Xanthir wrote:skeptical scientist wrote:That's one reason why it's a bad idea to conflate "normal" with "random" - random numbers are all normal (at least in the sense that the non-normal numbers have measure 0), but not all normal numbers should be considered random.
I still want to correct you with "random numbers are almost all normal". That phrasing accurately conveys what you expressed parenthetically.
I can easily express random numbers that are trivially not normal. For example, take any base-10 number that consists of a random sequence of 1s and 0s. This number is entirely "random", in that it cannot be compressed further.
skeptical scientist wrote:Xanthir wrote:I can easily express random numbers that are trivially not normal. For example, take any base-10 number that consists of a random sequence of 1s and 0s. This number is entirely "random", in that it cannot be compressed further.
It can certainly be compressed further, since the number of bits of information contained in the first N base-10 digits is N, whereas the number of bits contained in the first N digits of a random number is N*log_{2}(10). If you want to use "random" to mean "incompressible", then it's certainly the case that all random numbers are normal.
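skeptical scientist's counting argument can be seen directly. A sketch (my own, hypothetical helper name) that re-encodes a 0/1-digit number using one bit per digit, rather than the log2(10) ≈ 3.32 bits an unrestricted base-10 digit carries:

```python
import math
import random

# A base-10 number whose digits are only 0s and 1s can be re-encoded
# with one bit per digit: just read the digit string in base 2.
def pack(digit_string):
    # Keep the length separately so leading zeros aren't lost.
    return len(digit_string), int(digit_string, 2)

n = 10_000
digits = "".join(random.choice("01") for _ in range(n))
length, value = pack(digits)
assert value.bit_length() <= n          # fits in n bits...
assert n * math.log2(10) > 3.3 * n      # ...vs ~3.32n bits for general digits
```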
skeptical scientist wrote:Xanthir wrote:skeptical scientist wrote:That's one reason why it's a bad idea to conflate "normal" with "random" - random numbers are all normal (at least in the sense that the non-normal numbers have measure 0), but not all normal numbers should be considered random.
I still want to correct you with "random numbers are almost all normal". That phrasing accurately conveys what you expressed parenthetically.
No, I wrote what I meant. "Random" numbers should not have measure 0 properties of interest. So the random numbers are all normal, since the non-normal numbers are non-random.
skeptical scientist wrote:To prove it, you use the fact that the incompressible reals are the same as the Martin-Lof random reals, which are defined in terms of "tests". A test is a measure 0 set presented in a certain way, and a Martin-Lof random real is one which is not a member of any test. So an incompressible real is not a member of any test by the above equivalence, and non-normal numbers form a test. (They are a measure 0 set, and you can present the set of non-normal numbers in the needed way to make it a test.) Therefore incompressible numbers are all normal.
btilly wrote:skeptical scientist wrote:Xanthir wrote:I can easily express random numbers that are trivially not normal. For example, take any base-10 number that consists of a random sequence of 1s and 0s. This number is entirely "random", in that it cannot be compressed further.
It can certainly be compressed further, since the number of bits of information contained in the first N base-10 digits is N, whereas the number of bits contained in the first N digits of a random number is N*log_{2}(10). If you want to use "random" to mean "incompressible", then it's certainly the case that all random numbers are normal.
Nothing is incompressible. It all depends on your algorithm. For the classic example, I can compress The Mona Lisa to one bit - just have a compression algorithm that uses its first bit to say whether you've got the Mona Lisa, and then uses the rest to encode the picture in a jpeg format.
If this example seems contrived then you need to remember that lossless compression is entirely a game of encoding messages you expect to see efficiently at the cost of making messages you don't expect take more room. The amount of compression you get is therefore always the result of the interaction between the compression algorithm and the message to be compressed.
btilly wrote:Then what you meant is wrong. Any "random" number will necessarily be in a set of measure 0 - namely in the set consisting of just that number. Therefore it is impossible for a random number to not have some properties of measure 0.
Nothing is incompressible. It all depends on your algorithm. For the classic example, I can compress The Mona Lisa to one bit - just have a compression algorithm that uses its first bit to say whether you've got the Mona Lisa, and then uses the rest to encode the picture in a jpeg format.
If this example seems contrived then you need to remember that lossless compression is entirely a game of encoding messages you expect to see efficiently at the cost of making messages you don't expect take more room. The amount of compression you get is therefore always the result of the interaction between the compression algorithm and the message to be compressed.
I was assuming a fixed computable compression scheme to begin with. In this case, most strings are incompressible. Obviously if you allow the compression scheme to vary, you can compress everything, but that's not very interesting or relevant.
So yes, things are incompressible. In fact, almost all things are incompressible. The only 'compression algorithm' that will shorten the length of the message is one that encodes the message itself within it (or at least parts of it), thus making the (final message + algorithm) larger than the message itself.
skeptical scientist wrote:To be more precise, I meant the set \{x \in 2^\omega : K(n) - K(x|_n) \text{ is unbounded}\} has measure 0. (Here x|_n denotes the string consisting of the first n bits of x, and K is prefix-free Kolmogorov complexity.)
Yakk wrote:So yes, things are incompressible. In fact, almost all things are incompressible. The only 'compression algorithm' that will shorten the length of the message is one that encodes the message itself within it (or at least parts of it), thus making the (final message + algorithm) larger than the message itself.
Well, if you restrict "things" to "finite strings", then to be fair, the algorithm:
If there is a 00 at the start of the data, start the result with a single 0, then write out the rest of the data after the 00.
If there is not, start the result with a 1, and then write out the data.
is a really simple compression algorithm that compresses 25% of all data by 1 bit, and extends the other 75% by 1 bit.
75% isn't "almost all".
You can demonstrate that, for a given fixed algorithm, at most half of all strings up to a particular length can be compressed (pigeonhole), but I don't think you can give a better bound on how good compression can be?
Xanthir wrote:For a single fixed algorithm, sure, but that's hardly interesting, nor does it prove my point wrong. A single fixed algorithm can have its length amortized across the number of messages, which means in practice that it's "free". I made this point in my previous post. If you don't assume a single fixed algorithm, though, then your "compression algorithm" is quite poor indeed, as it doesn't compress a single message! (The algorithm obviously takes more than 1 bit to express.)
Yakk wrote:skeptical scientist wrote:To be more precise, I meant the set \{x \in 2^\omega : K(n)-K(x |_n) \text { is unbounded}\} has measure 0. (Here x|_{n} denotes the string consisting of the first n bits of x, and K is prefix-free Kolmogorov complexity.)
So, for something to be rejected, the complexity of the first n bits of x must fall more than any bounded distance below the complexity of the number n.
Simple elements, such as "all zeros", fall in your set. In short, you need to have no more 'information' in the element than "the length of the sequence requested". So that would roughly be the set of elements for which you can build a Turing machine T that, when fed the number n, outputs the first n digits of the number. Can I prove that? Hmm.
By the way, what measure? Ah, you want x to be a real number, so probably the standard measure on the reals.