Take 0x5f... for example. The technique for coming up with that magic number has existed for a while (practically by design of floating-point numbers), but it took the publishing of the Quake source code for people to notice it. This is way better than hiding your results in some subscription-only walled garden.
So, in that spirit, I thought I'd share some work that I did.
I found my own magic number for a related problem. I wanted to make a simple n-body gravity simulator, which means I'd need to compute an inverse-square law as quickly as possible. If you have two points in space, you get the direction from their normalized vector difference (an inverse square root), and a force proportional to the inverse square of the distance. Squared distance is easy to compute (x^2 + y^2 = r^2), so that part amounts to computing a reciprocal. Put those together and you want to compute (r^2)^-1.5. You could do this by computing (r^2)^-0.5 using the famous "Quake" technique and cubing the result, but I wanted to see if I could do it directly.
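For reference, the rsqrt-and-cube route mentioned above can be sketched in C. The constant 0x5f3759df and the single Newton step are from the published Quake III source; the function names here are my own:

```c
#include <string.h>

/* Quake III's fast inverse square root (constant and Newton step from
   the published source). */
float rsqrt_quake(float x) {
    float half = 0.5f * x;
    unsigned int i;
    memcpy(&i, &x, sizeof i);        /* reinterpret float bits as int */
    i = 0x5f3759df - (i >> 1);       /* approximate x^-0.5 in "log space" */
    float y;
    memcpy(&y, &i, sizeof y);
    y = y * (1.5f - half * y * y);   /* one Newton iteration to refine */
    return y;
}

/* (r^2)^-1.5 by cubing the inverse square root of the squared distance. */
float inv_r3(float r2) {
    float y = rsqrt_quake(r2);
    return y * y * y;
}
```

One Newton step brings the rsqrt within a fraction of a percent of the true value, so the cubed result stays well under a percent of error.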
The key is the fact that floating-point numbers are stored in a sort of scientific-notation format. As a result, interpreting the bits of a positive floating-point number as an integer very roughly gives you its logarithm. From HS algebra (or your old-skool slide-rule skillz), multiplying the log of a number by k is the same as raising the original number to the k-th power. For reciprocal square root, you want to multiply by -0.5; that's where the -(i >> 1) term in the Quake code comes from. In my case, what I want to do is -3*(i >> 1).
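To see the "bits are roughly a logarithm" claim concretely, here's a tiny sketch. The scale factor 2^23 and bias 127 come from the IEEE single-precision layout; the function name is mine:

```c
#include <string.h>

/* For a positive float x = 2^e * (1+m), the raw bits divided by 2^23,
   minus the exponent bias 127, give e + m, which is within about 0.086
   of the true log2(x) = e + log2(1+m). */
double bit_log2(float x) {
    unsigned int i;
    memcpy(&i, &x, sizeof i);
    return i / 8388608.0 - 127.0;    /* 2^23 = 8388608 */
}
```

For example, bit_log2(2.0f) is exactly 1.0 (powers of two are exact), while bit_log2(10.0f) gives 3.25 against a true log2 of about 3.32.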
Now, it is a matter of finding a magic number. Well:
- Code: Select all
0x9EB0BA3D - 3*(as_int >> 1)
Anyway, finding the magic number isn't very hard (HS algebra again) once you understand the mapping from floats to ints. I wasn't terribly thorough in optimizing this number: I'm pretty confident in the 0x9EB... bit, but the 0x...BA3D is kind of out of my butt. I wrote a really simple brute-force optimizer, which only looked at a few candidate points and minimized squared error (the published papers minimize relative error). But, good news: the approximation is good enough to make some plausible n-body animations.
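Wrapped into a function, the expression above looks something like this. A sketch only: no refinement step, positive normal inputs assumed, and, as said, accurate to only a few percent:

```c
#include <string.h>

/* Direct fast approximation of x^-1.5 using the magic number above.
   Sketch: no Newton refinement, so expect errors of a few percent. */
float inv_three_halves(float x) {
    unsigned int i;
    memcpy(&i, &x, sizeof i);
    i = 0x9EB0BA3Du - 3u * (i >> 1);   /* multiply the "log" by -1.5, add magic */
    float y;
    memcpy(&y, &i, sizeof y);
    return y;
}
```

In an n-body loop you'd call this on the squared distance and scale the raw difference vector by the result, giving direction and inverse-square magnitude in one shot.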
There are, of course, subtleties to the whole thing. The Wikipedia article buries the lede somewhat. A better exposition (and generalization) is the article "Floating-Point Tricks" by another graphics legend, Jim Blinn.
Yet another example of this technique is Kahan's fast cube root function, discovered by one of the architects of the floating-point spec.