mikewhite wrote:I think a major point the article misses is that the SAT is not scored on a completely linear scale, and a perfect score does not necessarily mean having every answer correct. This undermines the whole article: it answers a different question (a much easier one, I might add) than the one originally asked.

Several factors are taken into account, not just how many questions an individual got correct but also how everyone else performed. I think it would have been interesting to investigate what the distribution of scores would look like given random answers (would it be approximately normal, by the central limit theorem?); what the probability is that an individual would receive a perfect score (1600, or whatever it currently is) given random answers; and, supposing everyone else in the world also supplied only random answers, what score under normal test conditions would correspond to a perfect score in that scenario (I'd imagine anything above 1000 under normal conditions would probably get you there, but it would be fun to learn the actual answer!).
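The first of those questions is easy to check by simulation. Here's a quick sketch with made-up parameters (the question count and number of answer choices are hypothetical, not real SAT figures): a random guesser's raw score is a sum of independent Bernoulli trials, i.e. binomial, so the distribution of scores across many guessers is approximately normal by the central limit theorem.

```python
import random
import statistics

# Hypothetical parameters (NOT real SAT figures): 150 questions,
# 5 answer choices each, so a random guesser is right with p = 0.2.
N_QUESTIONS = 150
P_CORRECT = 0.2
N_STUDENTS = 100_000

def random_score():
    """Raw score (number correct) for one student guessing at random."""
    return sum(random.random() < P_CORRECT for _ in range(N_QUESTIONS))

scores = [random_score() for _ in range(N_STUDENTS)]

mean = statistics.fmean(scores)
sd = statistics.pstdev(scores)

# A Binomial(n, p) raw score has mean n*p and sd sqrt(n*p*(1-p));
# the histogram of these sums is close to normal by the CLT.
print(f"empirical mean {mean:.1f} (theory {N_QUESTIONS * P_CORRECT:.1f})")
print(f"empirical sd   {sd:.2f} (theory {(N_QUESTIONS * P_CORRECT * (1 - P_CORRECT)) ** 0.5:.2f})")
```

With these assumed parameters the raw scores cluster tightly around 30 correct with a standard deviation of about 4.9, which is exactly why a random guesser essentially never lands in the far upper tail.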

I feel this was a halfhearted attempt at the question at best, though given my background in math and statistics I probably hold it to higher standards. It would also probably take a full day's work by someone both intelligent and motivated to get all that information, if it could be obtained at all.

I know people have said this earlier in the thread, but I thought I'd flesh it out with suggestions of what I would have wanted to see. Sadly I'm too busy with work to hunt down the answers myself...

I was pretty disappointed by this entry for exactly these reasons. I believe SAT section scores are scaled so that 500 is the mean and 100 is the standard deviation. I don't think the fit is perfect (perhaps intentionally so), but some effort is made toward that effect. Assuming the College Board would continue to abide by these rules despite a much denser score distribution than normal, a random guesser would have about a 0.135% chance of a perfect score on each individual section (that is, of scoring 3 standard deviations above the mean). Because the guesses are random, the sections are independent, so the chance of a perfect score on all three would be (0.00135)^3 ≈ 2.5 × 10^-9, or about 0.00000025%. Apparently the percentile for a 2400 (as of 2006, according to Wikipedia) is 99.98, so when people take the test normally, 0.02% get a perfect score. That the real rate is so much higher makes sense, because a real person's results on the three sections are not independent: someone whose intelligence lies "three standard deviations above the mean" (whatever that means) would have a relatively good chance of getting a perfect score on each section.

The reported rate is roughly 80,000 times my calculated one. Perhaps there are some statistical subtleties due to the sections having different numbers of questions; I simply treated it as (chance of performing three SDs above the mean)^3. Also, the reported rate of 2400s (0.02%) is well below the chance of simply being three standard deviations above the mean on a single section (0.135%), so the official scoring likely isn't as rigidly curved as we would like to believe.
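For anyone who wants to check the tail arithmetic, it follows directly from the standard normal CDF. A quick sketch, assuming the mean-500/SD-100 curve is enforced exactly and taking 0.0002 as the reported share of 2400s:

```python
import math

def normal_tail(z):
    """P(Z > z) for a standard normal variable Z."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# Chance of scoring 3 SDs above the mean on one section
# (i.e., 800 on a section scaled to mean 500, SD 100).
p_section = normal_tail(3.0)      # ~0.00135, i.e. 0.135%

# If the three sections are independent (as they would be for a
# pure random guesser), the chance of a perfect score on all three:
p_all_three = p_section ** 3      # ~2.5e-9

# Reported share of test-takers with a 2400 (99.98th percentile):
p_reported = 0.0002

print(f"one section : {p_section:.6f}")
print(f"all three   : {p_all_three:.2e}")
print(f"reported    : {p_reported} ({p_reported / p_all_three:.0f}x higher)")
```

The reported rate comes out tens of thousands of times larger than the independent-sections prediction, which is the quantitative version of the point that a real test-taker's three section scores are strongly correlated.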