Assume there's a uniform probability distribution over all the natural numbers, so that, for any two natural numbers m and n, P(X=m) = P(X=n). If such a uniform probability distribution exists, then it must be that Σ_{i=0}^{∞} P(X=i) = 1. But either P(X=n) = 0 for all n, in which case Σ_{i=0}^{∞} P(X=i) = 0, or P(X=n) = p > 0 for all n, in which case Σ_{i=0}^{∞} P(X=i) is infinite. Either way the sum isn't 1, so no such uniform probability distribution exists.
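The step from the uniformity assumption to the contradiction is countable additivity applied to the singleton events (standard measure-theoretic notation, not spelled out in the proof above):

```latex
% Countable additivity over the singletons {0}, {1}, {2}, ...,
% with the uniform assumption P(X = i) = p for every i:
1 = P(\mathbb{N})
  = P\left(\bigcup_{i=0}^{\infty} \{i\}\right)
  = \sum_{i=0}^{\infty} P(X = i)
  = \sum_{i=0}^{\infty} p
  = \begin{cases}
      0      & \text{if } p = 0,\\
      \infty & \text{if } p > 0,
    \end{cases}
```

a contradiction in either case.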

When I'd previously thought about the question of whether or not such a uniform probability distribution could exist, my thinking was as follows. Assume there's a uniform probability distribution over all the natural numbers. If we then pick a natural number at random, it will be surprisingly small, in that there will only be finitely many natural numbers smaller than it, but infinitely many bigger. The probability of getting a larger natural number is one, while the probability of getting a smaller natural number is zero. If we pick again at random, we will almost surely get a bigger number. A sequence of such numbers, picked at random, would almost surely be strictly monotonically increasing.
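Since no genuine uniform distribution on all of ℕ exists, the "surprisingly small" effect can only be illustrated by approximation: a sketch (my own, under the assumption that Uniform{0, …, N−1} stands in for the imagined uniform distribution) showing that for any fixed number n, the chance that a fresh draw exceeds n tends to 1 as N grows.

```python
import random

# No uniform distribution on all of N exists, so approximate it by
# Uniform{0, ..., N-1} and let N grow.  For any FIXED n, the fraction
# of draws exceeding n approaches 1: every fixed n is "surprisingly small".
def fraction_exceeding(n, big_n, trials=100_000, seed=0):
    rng = random.Random(seed)
    return sum(rng.randrange(big_n) > n for _ in range(trials)) / trials

n = 1000
for big_n in (2_000, 100_000, 10_000_000):
    # Exact probability is 1 - (n + 1)/big_n; the estimate tracks it.
    print(big_n, fraction_exceeding(n, big_n))
```

Note the limit is taken with n fixed first; for a whole sequence of draws from a single finite Uniform{0, …, N−1}, each step is larger than the last only about half the time, which is part of why the intuition resists being made rigorous.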

The fact that we're guaranteed to get a surprisingly small number, given that every natural number counts as surprisingly small, strikes me as inconsistent with the assumption that we could have a uniform probability distribution over all the natural numbers in the first place.

I don't think this is the same as picking real numbers uniformly at random from an interval such as [0,1). Yes, any particular real number picked is extremely unlikely, in that the prior probability of picking it is zero, yet you're guaranteed to pick some number with probability zero anyway. But being extremely unlikely isn't the same as being surprisingly small. In the real-number [0,1) interval case, those zero probabilities aren't enough to make a strictly monotonically increasing sequence almost sure.
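The contrast with [0,1) can be checked directly. For k independent draws from Uniform[0,1), every ordering of the draws is equally likely, so the probability that they come out strictly increasing is 1/k!, nowhere near 1. A quick Monte Carlo check (my own sketch, not from the original argument):

```python
import random
from math import factorial

# Under Uniform[0,1), all k! orderings of k i.i.d. draws are equally
# likely, so P(strictly increasing) = 1/k!.  Compare estimate vs exact.
def prob_increasing(k, trials=200_000, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        xs = [rng.random() for _ in range(k)]
        if all(a < b for a, b in zip(xs, xs[1:])):
            hits += 1
    return hits / trials

for k in (2, 3, 4):
    print(k, prob_increasing(k), 1 / factorial(k))
```

So in the [0,1) case a long random sequence is almost surely *not* monotonically increasing, unlike what the assumed uniform distribution on ℕ would demand.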

But I'm not confident that my own thinking was correct. I haven't worked it out to a clear contradiction, such as 0=1.

But the kind of proof I've come across online just makes me wonder whether real numbers are adequate for the kinds of probability distributions being considered in the first place. After all, in the real-number [0,1) interval case, a probability of zero doesn't mean impossible, and a probability of one doesn't always mean surely; it often means almost surely instead. There seem to be differences that are significant yet smaller than the difference between any two unequal real numbers, i.e. infinitesimal. I'm therefore wary of the sort of proof I've encountered online.

But having said that, I know from experience that there's almost surely a reason why the kind of proof I found online is correct. Could someone post some helpful pointers, please?