Eternal Density wrote:We should totally make an application which scrapes all the first links in all wikipedia articles and finds all the loops and then determines what percentage of articles are directed to each loop. For science. And so we know how correct/incorrect Randall is.
That, in fact, is the only way to get something more than "wow, that article goes to the philosophy loop, too!".
And it should not be done on the English Wikipedia, as that has already been manipulated twice (in 2008 and 2011).
For any given time t, a given Wikipedia language, and some fixed set of rules for how to follow links:
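Such a rule set has to be pinned down precisely, or two people will get different chains from the same article. Here is a crude sketch in Python of one possible "first link" rule; the function name, the HTML simplifications, and the parenthesis-skipping heuristic are my own assumptions, not the exact convention used in the comic:

```python
import re

def first_link(article_html):
    """A crude 'first link' rule: take the first /wiki/ link in the first
    paragraph, after stripping parenthesized text (links inside parentheses
    are conventionally skipped). This is a deliberate simplification; a
    real rule would also skip italics, infoboxes, and footnotes."""
    m = re.search(r"<p>(.*?)</p>", article_html, re.S)
    if not m:
        return None  # article has no paragraph at all
    body = re.sub(r"\([^()]*\)", "", m.group(1))  # drop parenthesized text
    link = re.search(r'href="/wiki/([^"#]+)"', body)
    return link.group(1) if link else None
```

For example, on `<p>A dog (<a href="/wiki/Latin">Latin</a>) is a <a href="/wiki/Domestication">domesticated</a> animal.</p>` this returns `Domestication`, correctly skipping the parenthesized Latin link.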
You can eliminate dead ends by defining that such articles link to themselves; then every article leads to a loop.
In principle there could be just one loop in the whole Wikipedia, in which case every article would lead to it. In general, though, there are several.
You can then label these loops and divide the whole set of Wikipedia articles into subsets, where each article in a subset ends in the same loop.
These subsets have a well-defined structure: you can plot each one as a central loop with incoming chains from the articles that link into the loop (or from articles linking to articles that link into the loop, and so on).
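The partition into subsets can be computed by running the walk from every article and using the loop it lands in as the label. A self-contained sketch, again with a hypothetical pre-scraped first-link dictionary:

```python
from collections import defaultdict

def basins(first_link):
    """Group every article by the loop it eventually reaches.
    `first_link` maps each article to its first link; dead ends
    (titles missing from the dict) are treated as self-links."""
    groups = defaultdict(set)
    nodes = set(first_link) | set(first_link.values())
    for start in nodes:
        # walk until an article repeats, then extract the canonical loop label
        seen, path, node = {}, [], start
        while node not in seen:
            seen[node] = len(path)
            path.append(node)
            node = first_link.get(node, node)
        loop = path[seen[node]:]
        i = loop.index(min(loop))
        groups[tuple(loop[i:] + loop[:i])].add(start)
    return dict(groups)
```

The size distribution mentioned below is then just `sorted(len(s) for s in basins(first_link).values())`. (This restarts the walk from every node, so it is quadratic in the worst case; caching each node's loop label would make it roughly linear, but the simple version shows the idea.)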
The distribution of the sizes of these subsets would be an interesting thing... I suspect you will always find a few really big ones (containing mathematics, physics, ..., philosophy, language and other basic topics) and a large number of small ones, maybe just a loop of 2-3 articles with ~5-10 articles feeding into it.
That is not restricted to Wikipedia, of course. You can do it with any website, with the whole WWW (OK, mapping the whole visible WWW at one fixed moment t, before any links change, would be difficult), with any finite set of natural numbers and a function mapping it to itself, ...
If the set is not finite (like the natural numbers), these loops can turn into infinite sequences without repetition, and there can be infinitely many of them... but it is possible to define similar structures there.
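The same machinery works for a function on a finite set of numbers. A toy example I picked for illustration: f(n) = n² mod 21 on {0, ..., 20}, where iterating f from any starting point lands in one of a handful of loops, exactly like the article graph:

```python
def find_loop(f, start):
    """Iterate f from `start` until a value repeats; return the loop,
    rotated so its smallest element comes first."""
    seen, path, x = {}, [], start
    while x not in seen:
        seen[x] = len(path)
        path.append(x)
        x = f(x)
    loop = path[seen[x]:]
    i = loop.index(min(loop))
    return tuple(loop[i:] + loop[:i])

f = lambda n: n * n % 21          # squaring mod 21 maps {0..20} to itself
loops = {find_loop(f, n) for n in range(21)}
```

With these particular numbers you get six loops: the fixed points 0, 1, 7 and 15, plus the 2-cycles 4↔16 and 9↔18; e.g. 2 → 4 → 16 → 4 → ... ends in the loop (4, 16).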