Uniform Probability Distribution Over All Natural Numbers


FancyHat
Posts: 341
Joined: Thu Oct 25, 2012 7:28 pm UTC

Uniform Probability Distribution Over All Natural Numbers

Postby FancyHat » Fri Nov 08, 2013 4:08 pm UTC

I gather that there can be no such thing as a uniform probability distribution over all the natural numbers. But when I've looked it up online, the proof has been essentially the following, if I remember correctly.

Assume there's a uniform probability distribution over all the natural numbers; that is, for any two natural numbers m and n, P(X=m) = P(X=n). If such a uniform probability distribution exists, then it must be that Σ_{i=0}^∞ P(X=i) = 1. But either P(X=n) = 0 for all n, in which case Σ_{i=0}^∞ P(X=i) = 0, or P(X=n) > 0 for all n, in which case Σ_{i=0}^∞ P(X=i) is infinite. Either way the sum can't equal 1, so no such uniform probability distribution exists.

When I'd previously thought about the question of whether or not such a uniform probability distribution could exist, my thinking was as follows. Assume there's a uniform probability distribution over all the natural numbers. If we then pick a natural number at random, it will be surprisingly small, in that there will only be finitely many natural numbers smaller than it, but infinitely many bigger. The probability of getting a larger natural number is one, while the probability of getting a smaller natural number is zero. If we pick again at random, we will almost surely get a bigger number. A sequence of such numbers, picked at random, would almost surely be strictly monotonically increasing.

The fact that we're guaranteed to get surprisingly small numbers, together with the fact that every natural number is surprisingly small, strikes me as inconsistent with the assumption that we could have a uniform probability distribution over all the natural numbers in the first place.
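As a rough illustration of that intuition (my own sketch; since no genuine uniform distribution on all of ℕ exists, it uses a truncated stand-in, uniform on {0, ..., N-1} for an arbitrarily chosen large N):

[code]
# Sketch: use a uniform distribution on {0, ..., N-1} as a stand-in for
# "uniform on all of N", and watch what happens to the chance of drawing
# something at or below a fixed bound B as the truncation point N grows.
import random

B = 10**6                             # an arbitrary fixed bound
for N in (10**7, 10**9, 10**12):      # ever larger truncation points
    p_below = (B + 1) / N             # P(X <= B) for X uniform on {0, ..., N-1}
    sample = random.randrange(N)
    print(N, p_below, sample > B)     # p_below -> 0, so draws exceed B with probability -> 1

# In the (nonexistent) N -> infinity limit, every fixed B would have
# P(X <= B) = 0 and P(X > B) = 1, which is the "surprisingly small" point above.
[/code]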

I don't think this is the same as picking real numbers at random, with a uniform probability distribution, from an interval such as [0,1). Yes, any real number picked would be extremely unlikely, in that the prior probability of picking it would be zero, but you're guaranteed to pick a number with such a probability anyway. But something being extremely unlikely isn't the same as something being surprisingly small. In the real number [0,1) interval case, the individual zero probabilities aren't enough to make a strictly monotonically increasing sequence of picks almost sure.
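For comparison, the [0,1) case can be worked out directly: the first n independent uniform picks are almost surely distinct, and by symmetry all n! orderings are equally likely, so

\[
P(U_1 < U_2 < \dots < U_n) = \frac{1}{n!} \longrightarrow 0 \quad (n \to \infty),
\]

which is nowhere near almost sure, unlike the hypothetical uniform-on-ℕ picture above.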

But I'm not confident that my own thinking was correct. I haven't worked it out to a clear contradiction, such as 0=1.

But the kind of proof I've come across online just makes me wonder if real numbers are adequate for the kinds of probability distributions being considered in the first place. After all, in the real number [0,1) interval case, a probability of zero doesn't mean impossible, and a probability of one doesn't always mean certain; it often means almost surely instead. There seem to be infinitesimal differences that are significant, but smaller than the difference between any two unequal real numbers. I'm therefore wary of the sort of proof I've encountered online.

But having said that, I know, from experience, that there's almost surely a reason why the kind of proof I found online is correct. I wondered if someone could post some helpful pointers, please?

:)
I am male, I am 'him'.

Xanthir
My HERO!!!
Posts: 5410
Joined: Tue Feb 20, 2007 12:49 am UTC
Location: The Googleplex

Re: Uniform Probability Distribution Over All Natural Number

Postby Xanthir » Fri Nov 08, 2013 4:35 pm UTC

You're confusing discrete and continuous density functions.

You're right that each individual point of a continuous density function, such as a distribution over the reals in [0,1], has 0 probability. But each *range* of points has a meaningful probability (which may be 0, but may be >0).

The integers, by definition, can only have discrete density functions put on them, which are equivalent to taking a continuous density function and chopping it up into ranges surrounding each of your significant points. For integers, it's the same as taking a uniform distribution over the reals, and assigning the range [N-.5, N+.5) to each integer N.
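Here's a minimal sketch of that binning idea, using an Exp(1) density purely as a stand-in (a uniform density over all the reals isn't available to plug in):

[code]
# Turn a continuous density into a discrete one on the non-negative integers
# by assigning each integer n the mass of the interval [n - 0.5, n + 0.5).
# Exp(1) is used only as an example density; its CDF is 1 - e^(-x).
import math

def exp_cdf(x):
    return 1.0 - math.exp(-x) if x > 0 else 0.0

def binned_pmf(n):
    return exp_cdf(n + 0.5) - exp_cdf(max(n - 0.5, 0.0))

print(sum(binned_pmf(n) for n in range(50)))   # ~1.0: a genuine discrete distribution
[/code]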

So, each integer must have a *meaningful* probability; there's no escape hatch where you can defend all of them having 0 (infinitesimal) probability but summing to a non-zero value.

Now the proof applies cleanly. The uniform distribution over the integers must assign the same value to all integers, but the sum of the probabilities over all integers must equal 1. If P(n) == 0, then ΣP(n) == 0, and if P(n) > 0, then ΣP(n) == ∞. You can't win; neither possibility defines a valid probability density function, and so the uniform distribution must not actually exist.
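Written out, with p denoting the common value P(X = n):

\[
\sum_{n=0}^{\infty} P(X = n) \;=\; \sum_{n=0}^{\infty} p \;=\;
\begin{cases}
0 & \text{if } p = 0,\\
\infty & \text{if } p > 0,
\end{cases}
\]

and neither case equals 1.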

(This is also a proof why you can't have a uniform distribution over all reals.)
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

Flumble
Yes Man
Posts: 2248
Joined: Sun Aug 05, 2012 9:35 pm UTC

Re: Uniform Probability Distribution Over All Natural Number

Postby Flumble » Sat Nov 09, 2013 1:31 am UTC

Can't you state the probability of one value (or a finite range in an infinite space) as a limit, P(X=n) = lim_{t→0+} t?
Without 'solving' the limit, I'd say the whole thing sums to 1. The iffy part here is that lim_{t→0+} t is equal to 0 (at least in most definitions).
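The snag is that the limit and the infinite sum can't be swapped here:

\[
\sum_{n=1}^{\infty} \Big( \lim_{t \to 0^+} t \Big) = \sum_{n=1}^{\infty} 0 = 0,
\qquad\text{while for every fixed } t > 0,\quad \sum_{n=1}^{\infty} t = \infty,
\]

so neither order of operations gives a total of 1; the value being handed to each point really is just 0.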

lalop
Posts: 210
Joined: Mon May 23, 2011 5:29 pm UTC

Re: Uniform Probability Distribution Over All Natural Number

Postby lalop » Sat Nov 09, 2013 8:48 am UTC

To add to the "no escape hatch", if you take a probability space to be a measure space (as is done, for example, in the wikipedia article), you get countable additivity; thus, you cannot add up a countable collection of points (e.g. any subset of N) of probability 0, and get anything besides 0. (This assumes that the points are events/"measurable", which I assume to be part of what you mean by "uniform". You can have a probability space whose only events are ∅ and N, with P(∅) = 0 and P(N) = 1, if you want.)
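Spelled out, assuming each singleton {n} is an event:

\[
1 = P(\mathbb{N}) = P\Big( \bigcup_{n \in \mathbb{N}} \{n\} \Big)
  = \sum_{n \in \mathbb{N}} P(\{n\}) = \sum_{n \in \mathbb{N}} 0 = 0,
\]

a contradiction, so the singletons can't all have probability 0 (nor, by the earlier argument, any common positive value).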

Why countable additivity? I'm not sure (it might very well be a "dogma" as accused in the other thread), though this explanation might help some.

jestingrabbit
Factoids are just Datas that haven't grown up yet
Posts: 5967
Joined: Tue Nov 28, 2006 9:50 pm UTC
Location: Sydney

Re: Uniform Probability Distribution Over All Natural Number

Postby jestingrabbit » Sat Nov 09, 2013 10:28 am UTC

Countable additivity gives you several results that seem intuitive, but which don't hold for Riemann's integral. For instance,

http://en.wikipedia.org/wiki/Dominated_ ... ce_theorem

is true for the Lebesgue integral, but not the Riemann. More than that, though, countable additivity closes the set of measurable sets wrt the topology generated by the metric d(A,B) = μ(A Δ B), where Δ is the symmetric difference.

Finitely additive measures do have a place though. They're called means, and, on the natural numbers, they allow you to say things like "half the natural numbers are even". But, they're not so great with probability.
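For instance, the natural density d, which captures the "half the naturals are even" statement (and is only defined where the limit exists), gives

\[
d(A) = \lim_{N \to \infty} \frac{|A \cap \{1, \dots, N\}|}{N},
\qquad
d(\text{evens}) = \tfrac{1}{2},
\qquad
d(\{n\}) = 0 \text{ for every } n,
\]

so finite unions behave as you'd expect, but adding up the singleton values can never recover d(ℕ) = 1, which is exactly where countable additivity breaks.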
ameretrifle wrote:Magic space feudalism is therefore a viable idea.

Qaanol
The Cheshirest Catamount
Posts: 3069
Joined: Sat May 09, 2009 11:55 pm UTC

Re: Uniform Probability Distribution Over All Natural Number

Postby Qaanol » Sat Nov 09, 2013 2:34 pm UTC

This discussion does bring up the question of how to treat an unknown distribution. For example, in one of those logic puzzles where the antagonist writes a natural number down and puts it in an envelope, you have no information about the method by which the number was selected. As such, you generally must treat all possible numbers as equally likely, even though there is no proper distribution which could effect that result, because for any specific distribution that might have been used, there exists a horizontally-shifted version of that distribution for each natural number.
wee free kings

Flumble
Yes Man
Posts: 2248
Joined: Sun Aug 05, 2012 9:35 pm UTC

Re: Uniform Probability Distribution Over All Natural Number

Postby Flumble » Sat Nov 09, 2013 4:34 pm UTC

Qaanol wrote:This discussion does bring up the question of how to treat an unknown distribution. For example, in one of those logic puzzles where the antagonist writes a natural number down and puts it in an envelope, you have no information about the method by which the number was selected. As such, you generally must treat all possible numbers as equally likely

In the case of a human antagonist, you can assume it was likely a number below one thousand, more likely a number with fewer than 15 digits, even more likely a number with fewer than 5000 digits (for it to fit on a page that fits in the usual envelope), and extremely likely a number with fewer than 1,000,000,000 digits (for a no-lifer to write a digit a second, half a day, every day, for 70 years). :P Or put otherwise: for a human choosing a number, you can assume a geometric distribution.
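A throwaway sketch of that sort of model (every parameter here is a guess, not data): a geometric number of digits, then a uniformly random number with that many digits.

[code]
# Toy "human picks a number" model: geometrically distributed digit count
# (p_stop = 0.3 is an arbitrary guess), uniform among numbers of that length.
import random

def humanlike_number(p_stop=0.3):
    digits = 1
    while random.random() > p_stop:          # geometric digit count
        digits += 1
    low = 10 ** (digits - 1)
    return random.randrange(low, 10 * low)   # uniform among numbers with that many digits

print(sorted(humanlike_number() for _ in range(10)))
[/code]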

FancyHat
Posts: 341
Joined: Thu Oct 25, 2012 7:28 pm UTC

Re: Uniform Probability Distribution Over All Natural Number

Postby FancyHat » Sat Nov 09, 2013 6:16 pm UTC

Thanks for the replies!

I think I was being sloppy in my original post. I think I was trying to use the term 'probability' in two senses: real numbers in the interval [0,1]; and as an informal notion of relative likelihood.

I do get that Σ_{i=1}^∞ 0 = 0. And I do also get that lim_{n→∞} Σ_{i=1}^n 1/n = lim_{n→∞} 1 = 1. And that lim_{n→∞} 1/n = 0, of course. So, yes, I do understand that Σ_{i=1}^∞ (lim_{n→∞} 1/n) = Σ_{i=1}^∞ 0 = 0.

But, as illustrated by the concept of almost sure events as distinct from totally certain events, I also recognise there's a difference between something almost never happening, with probability zero, and something being impossible, with probability zero. For example, if I toss a coin repeatedly until I get tails, I'll almost surely get tails eventually, but it's still possible that I'll never get tails and never stop tossing. But if I do toss a coin, it's totally certain I'll get either heads or tails, and impossible for me to get neither.
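Putting numbers on the coin example:

\[
P(\text{no tails in the first } n \text{ tosses}) = 2^{-n} \longrightarrow 0,
\]

so the event of never getting tails has probability 0, yet the all-heads sequence is still a genuine outcome of the experiment; probability zero there doesn't mean impossible.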

So, despite the proof that there's no uniform probability distribution over all natural numbers, with probabilities, by definition, being real numbers in the [0,1] interval, I'm left wondering if that really does mean that the concept of being able to pick natural numbers, at random, where no natural number is any more likely (in an informal sense) than any other, implies a contradiction. Yes, I accept that we can't assign probabilities to all natural numbers for that, but I don't yet see how that means the concept I'm thinking about implies a contradiction itself.

This stuff reminds me of someone arguing that the square root of two must be rational, since you can get arbitrarily close to it with an endless sequence of rational numbers. But they'd be wrong. Proofs of the irrationality of the square root of two show that there's a meaningful distinction between rational and irrational real numbers, even though you can get them arbitrarily close to each other. The distinction between almost sure and totally certain events reminds me of this in a way, since it suggests that conventionally defined probabilities, real numbers in the [0,1] interval, aren't adequate for distinguishing between such cases. It's like something beyond real numbers is needed, something that includes infinitesimal differences.

I wouldn't accept that there's no square root of two on the basis that there's no rational square root of two. Similarly, I'm unconvinced by a proof about conventionally defined probabilities when they already seem inadequate for distinguishing between events that are truly impossible, and possible but which will almost never happen.
I am male, I am 'him'.

skullturf
Posts: 556
Joined: Thu Dec 07, 2006 8:37 pm UTC
Location: Chicago

Re: Uniform Probability Distribution Over All Natural Number

Postby skullturf » Sun Nov 10, 2013 12:54 am UTC

It's fair to ask for something "close" to a uniform probability distribution on the natural numbers. We can't have uniformity together with countable additivity. However, can we get something "close" that still satisfies some of the intuitive properties of "probability"?

Here's one attempt. It's not countably additive, but it is finitely additive. It results in things like "probability that n is odd = 1/2" and "probability that n is a square = 0". (If we even use the word "probability" here, which some people wouldn't like, since countable additivity doesn't hold.)

http://en.wikipedia.org/wiki/Natural_density
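A quick numerical check of those two values, using finite truncations only (so it merely illustrates the limit in the definition):

[code]
# Approximate natural densities by counting members of each set in {1, ..., N}.
import math

def density_upto(N, belongs):
    return sum(1 for n in range(1, N + 1) if belongs(n)) / N

for N in (10**3, 10**5, 10**6):
    d_odd = density_upto(N, lambda n: n % 2 == 1)
    d_square = density_upto(N, lambda n: math.isqrt(n) ** 2 == n)
    print(N, d_odd, d_square)   # d_odd stays near 1/2, d_square shrinks towards 0
[/code]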

lalop
Posts: 210
Joined: Mon May 23, 2011 5:29 pm UTC

Re: Uniform Probability Distribution Over All Natural Number

Postby lalop » Sun Nov 10, 2013 2:56 pm UTC

FancyHat wrote:The distinction between almost sure and totally certain events reminds me of this in a way, since it suggests that conventionally defined probabilities, real numbers in the [0,1] interval, aren't adequate for distinguishing between such cases. It's like something beyond real numbers is needed, something that includes infinitesimal differences.


I wonder what would happen if we defined probability in terms of the surreals. If I understand correctly, those contain ω, hence 1/ω which, if that probability were assigned to each of the natural numbers, would sum to 1.

Edit: looking through previous threads on the surreals, seems like a can of worms. Even I apparently knew something about them back in 2011. If anyone knows a good field that's not quite as large and has a 1/ω, that might be an alternative.

FancyHat
Posts: 341
Joined: Thu Oct 25, 2012 7:28 pm UTC

Re: Uniform Probability Distribution Over All Natural Number

Postby FancyHat » Sun Nov 10, 2013 3:21 pm UTC

lalop wrote:I wonder what would happen if we defined probability in terms of the surreals. If I understand correctly, those contain ω, hence 1/ω which, if that probability were assigned to all the natural numbers, would sum to 1.

I wondered about that kind of stuff, of which I know too little. One of the first thoughts that I had was this: what if we summed such probabilities for only those naturals that are perfect squares? Would they sum to one, since there's one perfect square for each natural number? Or would they sum to something less, since they're only some of the naturals? I'll have to read about surreal numbers - thanks for the link!

Edited to add:-

Whatever kinds of infinitesimals we try to use, aren't we always going to have the kind of problem where, with some sort of infinitesimal x, Σ_{i=1}^∞ x = Σ_{i=1}^∞ x + Σ_{i=1}^∞ x?
I am male, I am 'him'.

jestingrabbit
Factoids are just Datas that haven't grown up yet
Posts: 5967
Joined: Tue Nov 28, 2006 9:50 pm UTC
Location: Sydney

Re: Uniform Probability Distribution Over All Natural Number

Postby jestingrabbit » Sun Nov 10, 2013 5:03 pm UTC

A nicer place to start might be the hyperreals.

http://en.wikipedia.org/wiki/Hyperreal_number

If you're clever, you can use the transfer property with stuff that you've "hypered" and it doesn't get too weird. I mean, yeah, it's weird, but it's less weird than the surreals. I think I remember an appendix about hyper-measures and whatnot... but obviously it didn't sink in at all.
ameretrifle wrote:Magic space feudalism is therefore a viable idea.

lalop
Posts: 210
Joined: Mon May 23, 2011 5:29 pm UTC

Re: Uniform Probability Distribution Over All Natural Number

Postby lalop » Mon Nov 11, 2013 1:36 am UTC

The hyperreals were my first hope since they're the tamest, but I don't think they work in this case. If there were some number like 1/ω, then there would exist a least n in N* such that Σ_{m ≤ n} 1/ω ≥ 1, hence you could define N = {1,...,n}, contradicting that N is an external set.

FancyHat
Posts: 341
Joined: Thu Oct 25, 2012 7:28 pm UTC

Re: Uniform Probability Distribution Over All Natural Number

Postby FancyHat » Wed Nov 13, 2013 3:43 pm UTC

lalop wrote:If there were some number like 1/ω, then there would exist a least n in N* such that Σ_{m ≤ n} 1/ω ≥ 1, hence you could define N = {1,...,n}, contradicting that N is an external set.

Is the idea to be able to say stuff like Σ_{i=1}^{ω} 1/ω = 2 Σ_{i=1}^{ω/2} 1/ω?

Qaanol wrote:This discussion does bring up the question of how to treat an unknown distribution.

I think that might be how I first wondered if it could be possible to have a random source of natural numbers where no number was any more likely than any other.

Qaanol wrote:For example, in one of those logic puzzles where the antagonist writes a natural number down and puts it in an envelope, you have no information about the method by which the number was selected. As such, you generally must treat all possible numbers as equally likely, even though there is no proper distribution which could effect that result, because for any specific distribution that might have been used, there exists a horizontally-shifted version of that distribution for each natural number.

That distinction, between the actual method and uncertainty about it, seems important. Even if such a method isn't actually possible, there's still the uncertainty about what the method is.

But I wonder if, even as a matter of maximum uncertainty, we'd have to treat all possible numbers as equally likely. With ℕ = {1, 2, 3, ...}, there's only one distribution where P(X=1) = 1, and infinitely many where P(X=1) < 1. If P(X=1) = p_1 < 1, then for each p_1 there's only one distribution where P(X=2) = 1 - p_1. And, where p_i = P(X=i), for each p_1 and p_2 with p_1 + p_2 < 1, there is only one distribution where p_3 = 1 - (p_1 + p_2). And so on. Could this, somehow, lead to something other than all possible numbers being equally likely?

One immediate thought is that for each nonuniform distribution, there are infinitely many similar distributions that are equivalent if you map from natural numbers to natural numbers. For example, the distribution where P(X=1)=2/3, P(X=2)=1/3, and P(X>2)=0 can be mapped to the distribution where P(X=1)=1/3, P(X=2)=0, P(X=3)=2/3, and P(X>3)=0 by mapping X=1 to X=3, X=2 to X=1, and X=3 to X=2. This suggests that, knowing nothing about the method from which these natural numbers are drawn, all possible numbers would have to be treated as equally likely. But we can only map natural numbers to natural numbers, which makes me wonder if the suggestion might be wrong. I have a sense of a question being begged, but I'm unable to express it clearly.

Consider the distribution where P(X=n) = 1/2^n. We can shift that distribution along by k places, to get P(X ≤ k) = 0 and P(X=n+k) = 1/2^n, but we can't reverse that distribution so that probabilities double rather than halve when we add one to n. What I'm wondering is whether that kind of asymmetry means that, when mapping natural numbers to natural numbers to get different probability distributions, some distributions are, in a sense, more likely than others, so that even if we know nothing about the method being used, we'd be somehow wrong to treat all possible numbers as equally likely.

I think I'm basically wondering if looking at probability distributions of probability distributions would help, in the sense of asking, "What's the probability that the unknown method has this particular probability distribution?"
I am male, I am 'him'.

