This forum is for the individual discussion thread that goes with each new comic.


I've been reading xkcd for a while, but this comic has gotten me to post. The sad part is that neither branch is really very satisfying. Yes, it is great to publish, but there are many ways to publicize results, and some are much better in terms of third-party (in)validation. Reading academic papers is a practically adversarial task: the authors have an incentive to make their idea look as good as possible, even if all they've done is make some niche-y tweak to something that already existed. That makes it difficult to find the actually useful bits.

Take 0x5f... for example. The technique for coming up with that magic number has existed for a while (practically by design of floating-point numbers), but it took the publishing of the Quake source code for people to notice it. This is way better than hiding your results in some subscription-only walled garden.

So, in that spirit, I thought I'd share some work that I did.

I found my own magic number for a related problem. I wanted to make a simple n-body gravity simulator, which means I'd need to compute something with an inverse-square law as quickly as possible. If you have 2 points in space, you get the direction from their normalized vector difference (an inverse square root), and a force proportional to the inverse square of the distance. Squared distance is easy to compute (x^2 + y^2 = r^2), so that part amounts to computing a reciprocal. Put those together and you want to compute (r^2)^-1.5. You could do this by computing (r^2)^-0.5 using the famous "Quake" technique and cubing the result, but I wanted to see if I could do it directly.

The key is the fact that floating-point numbers are stored in a special scientific-notation format. As a result, interpreting the bits of a positive floating-point number as an integer is very roughly like taking a logarithm. From HS algebra (or your old-skool slide-rule skillz), multiplying the log of a number by some factor is like raising the original number to that power. For reciprocal square root, you want to multiply by -0.5; this is where the -(i >> 1) term comes from in the Quake code. In my case, what I want is -3*(i >> 1), which multiplies by -1.5.

Now, it is a matter of finding a magic number. Well:

`0x9EB0BA3D - 3*(as_int >> 1)`

Anyway, finding the magic number isn't very hard (HS algebra) if you understand the mapping from floats to ints. I wasn't as thorough in optimizing this number: I'm pretty confident in the 0x9EB... bit, but the 0x...BA3D part is kind of out of my butt. I did a really simple brute-force optimizer, which only looked at a few candidate points and minimized squared error (the published papers minimize relative error). But, good news, the approximation is good enough to make some plausible n-body animations.
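For concreteness, here's a sketch of how that one-liner might be wrapped up as a full C function. The constant and the -3*(i >> 1) term are from the post above; the function name, the memcpy-based bit reinterpretation (which avoids the undefined behavior of a pointer cast), and the example values are my own additions. With no Newton refinement step, expect errors of a few percent:

```c
#include <stdint.h>
#include <string.h>

/* Rough approximation of x^(-3/2) for positive x, using the
 * poster's magic constant. No Newton step, so accuracy is limited. */
float fast_rsqrt_cubed(float x)
{
    uint32_t i;
    memcpy(&i, &x, sizeof i);        /* reinterpret float bits as int */
    i = 0x9EB0BA3Du - 3u * (i >> 1); /* "multiply the log" by -3/2    */
    float y;
    memcpy(&y, &i, sizeof y);
    return y;
}
```

For example, fast_rsqrt_cubed(4.0f) comes out around 0.1175 against an exact value of 0.125 - within a few percent, which is good enough for plausible-looking animations, as the poster says.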

There are, of course, subtleties to the whole thing. The Wikipedia article buries the lead somewhat. A better exposition (and generalization) is in the article "Floating-Point Tricks" by another graphics legend, Jim Blinn:

Yet another example of this technique is Kahan's fast cube root function, discovered by one of the architects of the floating-point spec.

Won

Posts: 1
Joined: Wed Nov 18, 2009 9:34 pm UTC

Dudely wrote:
littlelj wrote:
FireZs wrote:The Business part should be "My god, this code is utterly unreadable and unmaintainable! Re-write it until you can explain it to me in under a minute."

"You know that thing you needed doing? Well, I've done it and billed it out."

Usually works for me

You should always aim for your code to be readable and maintainable in a business situation. Most of the brilliance of your code in business comes from other people being able to read, modify, and reuse your code as easily as humanly possible (along with it being functional and not eating up more resources than it needs to). I maintain an application written by someone who did things the 'cool' way. . . I feel like driving his head through the wall.

Drat. I knew I should have re-emphasised that I don't code - this is prose, or maybe lookups or macros at best.

However, agreed. The few things I have had to maintain have generally required total rebuilds. Does nobody comment any more?!
Dudes, I'm a woman.

littlelj

Posts: 140
Joined: Wed Feb 18, 2009 10:40 am UTC

radtea wrote:The academic view is all wrong, though, both for reasons others have pointed out and because very little academic research is still published freely if there is any potential commercial value. The next move in the academic panel would be to call the university's IP office and get that sucker locked up, delaying publication and focusing the researchers on writing patent documents rather than academic papers.

I have brilliant, beyond-state-of-the-academic-art image registration code locked up inside commercial codebases, and I have brilliant, beyond-state-of-the-commercial-art cancer diagnostics stuff locked up inside university patent applications.

The patented stuff will eventually get out into the world, but then, so will the commercial codebase stuff, in some form or another.

I was wondering when patents would get mentioned.

The comic flies in the face of how patents are "supposed" to work. I was reluctant to bring it up for fear of sparking a flame-war about software patents.
Did you get the number on that truck?

phillipsjk

Posts: 1215
Joined: Wed Nov 05, 2008 4:09 pm UTC

Hey, another first time poster. I actually wrote the wiki article on Fast Inverse Square Root, after I was surprised to find we didn't have one. I'm not a comp. sci guy or a programmer, so part of the challenge was understanding the material enough to present a decent summary. If you folks think the article can be improved (and I know it can) go ahead and edit it! I can read some of the comments here and make tweaks, but those are going to be based on my limited knowledge. It would be much cooler to get you guys involved in editing it.

Protonk

Posts: 2
Joined: Wed Nov 18, 2009 10:54 pm UTC

On the magic number, a constant appearing in a fast calculation of the inverse square root of a float.

I've just read the wiki: http://en.wikipedia.org/wiki/Fast_inverse_square_root. Not so scientific, I know. But it seems that you can CALCULATE the magic number pretty easily. The idea behind the fast inverse square root is that one reduces the problem to an equality with logarithms. For x close to 0, log(1+x) has a good linear approximation: log(1+x) ~ x. So one simply replaces the log with a much simpler function. Reading on in the wiki, they get fuzzy about a term σ = 0.0450461875791687011756 that is needed so that... I don't really understand. It looks as if it corrects an error in the estimate of the log. A better estimate would be log(1+x) ~ x + σ?

Anyway. Following the rest of the equalities is straightforward. The magic number, denoted by R, is just a constant appearing in the equation. The interesting thing, which is not in the wiki but you can figure out yourself, is that R only depends on the number of bits in the float you started with, the number of those bits that are reserved for the exponent, and the term σ. Following the wiki, one can calculate R this way:

n=32 (32 bits float)
b=8 (8 bits in the float are reserved for the exponent)

Two constants arise in the calculation:
L = 2^(n − 1 − b)
B = 2^(b − 1) − 1

The magic number then turns out to be:
R = 3/2 * (B − σ) * L, where σ = 0.0450461875791687011756

So, punching it into a calculator:
R = 1597463012

R = 0x5F3759E4

Close, but not exactly 5f3759df. It seems that the calculation and/or constants in the wiki are not quite right. No surprise there.

Posts: 6
Joined: Wed Nov 18, 2009 10:41 pm UTC

Protonk wrote:Hey, another first time poster. I actually wrote the wiki article on Fast Inverse Square Root, after I was surprised to find we didn't have one. I'm not a comp. sci guy or a programmer, so part of the challenge was understanding the material enough to present a decent summary. If you folks think the article can be improved (and I know it can) go ahead and edit it! I can read some of the comments here and make tweaks, but those are going to be based on my limited knowledge. It would be much cooler to get you guys involved in editing it.

Oh. It seems that I wrote my reply just when you posted yours. Nice that you wrote the wiki article. I'm impressed that you got this far without a sci/comp background.

Posts: 6
Joined: Wed Nov 18, 2009 10:41 pm UTC

Radical wrote:On the magic number, a constant appearing in a fast calculation of the inverse square root of a float.

I've just read the wiki: http://en.wikipedia.org/wiki/Fast_inverse_square_root. Not so scientific, I know. But it seems that you can CALCULATE the magic number pretty easy. The idea behind the fast inverse square root is that one reduces the problem to an equality with logarithms. For x close to 0, the log(1+x) has a good linear approximation: log(1+x)~x. So one simply replaces the log with a much simpler function. Reading on in the wiki they get fuzzy about a term σ = 0.0450461875791687011756, that is needed so that... I don't really understand. It looks like as if it corrects an error in the estimate of the log. A better estimate would be log(1+x)~x+σ?

I think I included σ because McEniry relied on it to motivate the rest of the illustration/proof (depending on how formal you want to be). I could explain things without resorting to the derived constant, but I didn't feel comfortable doing so. Walking through McEniry shows how σ is used to connect the approximation to the value of log_2(x). It's an intermediate constant: it makes the fit for log(1+x) good, which in turn makes the initial guess for the inverse sqrt good.

I can probably clean that paragraph up though.

Edited for clarity
Protonk

Posts: 2
Joined: Wed Nov 18, 2009 10:54 pm UTC

lazarus89 wrote:Wow, condescending much?

Some people write code to get the job done, others do it for the recognition. I know what kind of coder I am, and now sadly, I know what Randall is too.

And some people want to benefit society, but due to copy-right/patent mumbo-jumbo, the moment you create the code during working hours, the code belongs to the company, and from what we have seen thus far, potentially locked down forever.
I am NOT a snake.

Opinions discussed are not necessarily the opinions of the people discussing them.
Kyrn

Posts: 937
Joined: Sat Sep 05, 2009 3:55 pm UTC
Location: The Internet

woktiny wrote:
Plasma Man wrote:Good and true comic. In my experience (in business), the only reward for being competent at your job is that your job becomes to fix other people's incompetence.

That IS my job. It's clear that as long as I work in this field, I'll never find the next 0xdeadbeef, erm... 0x5f3759df.

I can say, though, that in my ten years in business, I've had my fair share of experiences seeing brilliance unnoticed and "can you make that cell green" gratuitously rewarded. It can be awfully demoralizing.

Bike Shed*

* refresh the page once or twice
Try the Printifier for xkcd. You can now scale the comic between 50 and 150%.

I find these very useful: Common Errors in English Usage (web site) and Eats, Shoots & Leaves (book). You may, too.

e pluribus unum

dennisw

Posts: 438
Joined: Wed Nov 05, 2008 9:09 am UTC
Location: Appearing pro se AND pro bono!

I found this article on the current status of P=NP very interesting.

http://cacm.acm.org/magazines/2009/9/38 ... m/fulltext
"Pain is a wonderful emotion, scariest sea in a world of oceans" - Iris

spi

Posts: 213
Joined: Fri Mar 30, 2007 8:54 pm UTC
Location: Outskirts of Camberville

So many radically different responses, but either way I look at it, I'm really questioning my career decisions. In addition to the bleak images being painted of both business and academia, I'm in my third semester, and I really don't know what you guys are talking about half the time.

Indie game development is looking more enticing. Screw both worlds, I'll stay in my own!
Emperor_Z

Posts: 25
Joined: Wed May 14, 2008 10:39 am UTC

As far as I can see, the only rewarding position as a software developer is running your own (successful) business. Yes, it's hard to pull off and takes a lot of effort, but even if you fail you'll learn something from it. Ask Carmack. He can probably work on anything he pleases without ever justifying what he does. He probably spent countless nights sitting in front of the screen before he released his first game, but I guess he enjoyed it, which is far more important than anything else.

Commenting on the two options displayed in the strip:

1) If you're working for some company, you're mostly fulfilling someone else's vision. This can be cool stuff and/or you may earn loads of money. But the chance is high that your work's either underappreciated, or you're under constant deadline pressure, or you simply feel like you're living an unfulfilled life at the end of the day (if all three points apply to you... well, better get your act together and quit right now! You're a human, not an effing machine!).

2) If your job's within academia, I guess the possibility is quite high that you can work on stuff that interests you and that you can share your creation with people who actually understand and support what you're doing. But there are also negative sides: lower earnings if you're a starter, peer pressure, and first and foremost politics (everything you do needs financial backing, which you need to justify; a real pain). If you're lucky, your project may attract some industrial interest, but what's the chance?
ZuBsPaCe

Posts: 1
Joined: Thu Nov 19, 2009 10:03 am UTC

Randall's job before becoming a full-time web cartoonist was at NASA, "getting poorly documented software libraries to talk to each other."1 I wonder whether he classifies NASA as academia or business.

1according to the introduction of xkcd volume 0
eboyce

Posts: 22
Joined: Thu Dec 06, 2007 11:38 pm UTC
Location: Sydney, Australia

exoteric

Posts: 4
Joined: Fri Nov 14, 2008 7:41 am UTC

phlip wrote:Yeah, the P vs NP thing is a matter of significant importance to computer scientists, reasonable importance to programmers in general, and potentially interesting to people who find maths interesting, but pretty much irrelevant to anyone else.

I'd say whether P=NP is pretty important to anyone who has used the internet, even if they don't know it, since the usefulness of encryption relies on the fact that P != NP for the purposes of any solution anyone has come up with.
mootinator

Posts: 69
Joined: Mon Mar 26, 2007 10:27 pm UTC

ZuBsPaCe wrote:As far as I can see, the only rewarding position as a software developer is running your own (successful) business. Yes, it's hard to pull off and takes a lot of effort, but even if you fail you'll learn something out of it. Ask Carmack. He can probably work on everything he pleases without ever justifying what he does. He probably spent countless nights sitting in front of the screen before he released his first game, but I guess he enjoyed it, which is far more important than everything else.

I can think of a few things more important, like having food to eat, a roof over his head and power to his computer.
mootinator

Posts: 69
Joined: Mon Mar 26, 2007 10:27 pm UTC

Dudely wrote:
littlelj wrote:
FireZs wrote:The Business part should be "My god, this code is utterly unreadable and unmaintainable! Re-write it until you can explain it to me in under a minute."
"You know that thing you needed doing? Well, I've done it and billed it out."

Usually works for me
You should always aim for your code to be readable and maintainable in a business situation. Most of the brilliance of your code in business comes from other people being able to read, modify, and reuse your code as easily as humanly possible (along with it being functional and not eating up more resources than it needs to). I maintain an application written by someone who did things the 'cool' way. . . I feel like driving his head through the wall.

I gotta agree with you there. In the practical business world, the human interaction is more important than anything else. Making your code clear and easy to read by anybody else who is going to look at it is important.

Communication

It's a skill a lot of people don't value highly enough. I mean heck, you only have to look as far as the previous comic to see Randall praising a famous person known primarily for their communication skills.
Gelsamel wrote:If you punch him in the face repeatedly then it's science.

Karilyn

Posts: 220
Joined: Thu Oct 15, 2009 6:09 pm UTC

mootinator wrote:
phlip wrote:Yeah, the P vs NP thing is a matter of significant importance to computer scientists, reasonable importance to programmers in general, and potentially interesting to people who find maths interesting, but pretty much irrelevant to anyone else.
I'd say whether P=NP is pretty important to anyone who has used the internet even if they don't know it since the usefulness of encryption relies on the fact that P != NP for the purposes of any solution anyone has come up with.

If we proved P = NP, we'd lose encryption.

And yet we'd win. P = NP (presuming a tractable algorithm, not a x^1 billion polynomial) would be an innovation that would make the entire information technology revolution so far look like a cave man making flint spear points next to a nuclear power plant.
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR

Last edited by JHVH on Fri Oct 23, 4004 BCE 6:17 pm, edited 6 times in total.

Yakk
Poster with most posts but no title.

Posts: 10212
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

westrim wrote:You mathy people will just have to accept that there are just some things that you aren't going to get, because they just aren't there.

You're not a mathy person until you get the things that just aren't there.
fibonacci

Posts: 21
Joined: Mon Jul 09, 2007 8:37 pm UTC

### Re: why no paper?

bleghsqueek wrote:What would prevent a programmer in business to try and get a paper published? Do their employers (the companies) own all the code that they write? If I was a programmer that wrote a brilliant bit of code that I think could be published, I'd check if I would need permission from the company, get it if needed (what company would not want to put a part of their software on the academic podium?), write a paper and and submit it to a journal...

Disclaimer: I am not a CS student and I have published exactly 0 articles in peer-reviewed journals.

You don't get papers published because you have good code; you get papers published because you've solved interesting and academically relevant problems. Most things programmers do in the business world are inherently unpublishable (even if their code is brilliant) because they aren't relevant to academic concerns. The guys who came up with the MVC paradigm and invented object orientation etc. didn't publish in academic journals because it's not really academic work (even if it's useful). You do sometimes get original and academically relevant ideas coming from the business world, but this tends to be commercial research (e.g. data-mining algorithms, stock-market modelling techniques, etc.) rather than programmers solving coding problems in a clever way.

Last edited by poohat on Thu Nov 19, 2009 7:43 pm UTC, edited 1 time in total.
poohat

Posts: 227
Joined: Mon Apr 07, 2008 6:21 am UTC

Hmm, I do observe situations like Randall describes in the business world, but I interpret them slightly differently:

When I was in Uni we spent a lot of time talking about how to "solve important problems well" - but by "important" we meant things like problems that have general applicability (i.e. searching, sorting) and by "well" we meant knowing things like increasing the efficiency of the algorithm.

In business you can still solve important problems well, but "important" means the ones that have the most effect on the bottom line (so unglamorous work like fixing Exchange can take precedence) and "well" means "in a way that has the most value" (usually I interpret this to mean code that is maintainable, so clear and simple solutions can sometimes be favored over elegant ones, in the Chaitin sense).

Mind you, these are what I believe are the ideal ethos of each group - as others have noticed, both business and academia spend considerable time spinning their wheels.

For the record I currently work for the IT business *IN* academia - I'll leave it to the reader to decide if that's the best of both worlds or the worst.
sarkeizen

Posts: 18
Joined: Wed May 14, 2008 1:34 am UTC

Karilyn wrote:I gotta agree with you there. In the practical business world, the human interaction is more important than anything else. Making your code clear and easy to read by anybody else who is going to look at it is important.

Communication

It's a skill a lot of people don't value highly enough. I mean heck, you only have to look as far as the previous comic to see Randall praising a famous person known primarily for their communication skills.

True. Too, too true.

dennisw

Posts: 438
Joined: Wed Nov 05, 2008 9:09 am UTC
Location: Appearing pro se AND pro bono!

tahrey wrote:Can anyone explain this inverse square magic number thing to me and the other curious, yet unwashed non-math-genius masses WITHOUT it descending into an inpenetrable fucking mess of algebra?

Okay. Not to put too fine a point on it, floating-point numbers on modern computers are represented by bit fields corresponding to m x 2^e, like scientific notation except in base 2, with the mantissa m between 1 and 2 (there's also a sign bit, but since real square roots of negatives are undefined, we'll ignore it).

The inverse square root of z is z^(-1/2), so if z = m x 2^e, it becomes m^(-1/2) x 2^(-e/2). Halving the exponent is what the one-bit right shift accomplishes, and subtracting the shifted bits from a well-chosen constant handles the negation. That gets us the right order of magnitude for the answer.

For the mantissa part, the shift-and-subtract only gives a rough first guess, so the algorithm refines it with one iteration of Newton's method, y = y(1.5 − x·y²/2). The magic number itself can be found (by trial and error or by optimization calculations) so that the initial guess is as close as possible across the whole mantissa range.
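For reference, the routine from the Quake III source looks essentially like this (lightly modernized here: memcpy replaces the original pointer-cast bit reinterpretation, which is technically undefined behavior in C):

```c
#include <stdint.h>
#include <string.h>

/* Fast approximation of 1/sqrt(x), after the famous Quake III routine. */
float Q_rsqrt(float number)
{
    const float x2 = number * 0.5f;
    uint32_t i;
    memcpy(&i, &number, sizeof i);   /* bits of the float, as an integer */
    i = 0x5f3759df - (i >> 1);       /* magic initial guess for x^(-1/2) */
    float y;
    memcpy(&y, &i, sizeof y);
    y = y * (1.5f - x2 * y * y);     /* one Newton iteration refines it  */
    return y;
}
```

One iteration brings the worst-case error down to a fraction of a percent; the original source has a second, commented-out iteration for when more accuracy is needed.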
dummy account

Posts: 11
Joined: Tue Apr 01, 2008 5:09 pm UTC

On the other hand...

Mutak

Posts: 1
Joined: Thu Nov 19, 2009 9:56 pm UTC

Radical wrote:Close, but not exactly 5f3759df. It seems that the calculation and/or constants in the wiki are not quite right. No surprise there.

IIRC the magic number from the Q3 source works better (for Q3) than the calculated optimum (I don't recall exactly why - number of iterations / range of typical values / error margin)
raf

Posts: 1
Joined: Fri Nov 20, 2009 12:53 am UTC

I think it's something like this...

Imagine, to make the numbers simpler, that the calculated optimum gives an initial guess correct to within ±0.1. And suppose that if the first guess is too high by 0.1, the first iteration of Newton's method will be out by 0.01, but if the first guess is too low by 0.1, the first iteration will be out by only 0.005. (These numbers don't bear any resemblance to the actual numbers; it's just an example.) Then by the analysis in the paper (which considered just the maximal error of the guess), it would have an error of 0.01 after one iteration of Newton's method.

Whereas a different magic number might make the initial guess at most 0.08 too high or 0.12 too low... the initial guess looks worse, because the maximal error is 0.12 instead of 0.1. But then, if the initial guess is too high, the first iteration of Newton's method might be out by 0.008, and if it's too low, out by 0.007. Since the method tolerates a too-low guess better, you can afford to let the guess err more on the low side... so the maximal error after the first iteration is 0.008, which is better than the original case.

So yeah... the calculated number is calculated to minimise the error of the initial guess, and that error calculation treats a too-high error and a too-low error as equivalent, though they may not be. A more accurate calculation would weight too-high and too-low errors based on their effects on the final result.
While no one overhear you quickly tell me not cow cow.

phlip
Restorer of Worlds

Posts: 6964
Joined: Sat Sep 23, 2006 3:56 am UTC
Location: Australia

mootinator wrote:I'd say whether P=NP is pretty important to anyone who has used the internet even if they don't know it since the usefulness of encryption relies on the fact that P != NP for the purposes of any solution anyone has come up with.

Thank you! I was losing it here thinking... "Isn't P=NP like the basis for cryptography?" But none of the "maths" here mentioned it so just before I got to your post I finally conceded "What the hell was I thinking of?"
If the male mind truly were a machine it would consist of a shaft and a bushing.

Linux0s

Posts: 222
Joined: Sat Dec 29, 2007 7:34 pm UTC

William Gosset was working in the business world (the Guinness Brewery) when he developed a statistical method for estimating means from small samples, in order to maximize barley yields.

Of course, Gosset had to use a pseudonym to publish his work - now called the t distribution, or Student's t - because a previous scientist working for Guinness had published company secrets as part of his data.

CorruptUser

Posts: 4975
Joined: Fri Nov 06, 2009 10:12 pm UTC

Yakk wrote:And yet we'd win. P = NP (presuming a tractable algorithm, not a x^1 billion polynomial) would be an innovation that would make the entire information technology revolution so far look like a cave man making flint spear points next to a nuclear power plant.

Why is that? What would it allow us to do?
vultur-10

Posts: 10
Joined: Fri Nov 20, 2009 7:57 am UTC

mootinator wrote:
phlip wrote:Yeah, the P vs NP thing is a matter of significant importance to computer scientists, reasonable importance to programmers in general, and potentially interesting to people who find maths interesting, but pretty much irrelevant to anyone else.

I'd say whether P=NP is pretty important to anyone who has used the internet even if they don't know it since the usefulness of encryption relies on the fact that P != NP for the purposes of any solution anyone has come up with.

Problems in P can still be totally intractable; it's not like there's any practical difference between an NP problem and one which is O(N^1000!)
poohat

Posts: 227
Joined: Mon Apr 07, 2008 6:21 am UTC

vultur-10 wrote:
Yakk wrote:And yet we'd win. P = NP (presuming a tractable algorithm, not a x^1 billion polynomial) would be an innovation that would make the entire information technology revolution so far look like a cave man making flint spear points next to a nuclear power plant.

Why is that? What would it allow us to do?

Basically, become GODS. You know how people like to say "Where's my flying car and robot maid?" If P=NP you'd have those things, and much much more.
FireZs

Posts: 375
Joined: Wed Sep 03, 2008 3:18 pm UTC

FireZs wrote:
vultur-10 wrote:
Yakk wrote:And yet we'd win. P = NP (presuming a tractable algorithm, not a x^1 billion polynomial) would be an innovation that would make the entire information technology revolution so far look like a cave man making flint spear points next to a nuclear power plant.

Why is that? What would it allow us to do?

Basically, become GODS. You know how people like to say "Where's my flying car and robot maid?" If P=NP you'd have those things, and much much more.


dennisw

Posts: 438
Joined: Wed Nov 05, 2008 9:09 am UTC
Location: Appearing pro se AND pro bono!

Every mathematical problem with a short proof gets solved. The implications of this could be large.

Nearly every problem of scheduling becomes solved. Goods travel more efficiently.

Entire categories of physics and engineering and biology problems that are currently intractable become easy. Biochemistry, industrial design, applications of physics results are almost certainly going to fall out in droves (unless, of course, we have solved every problem in the categories that suddenly become easy already, which seems unlikely).

Entire categories of theories of everything become verifiable. The very way we do theoretical physics will change. The way we do mathematics will change. Know how mathematics managed to generate a bunch of applications that turned out to be useful? Well, almost every problem that mathematicians are currently trying to solve ... is solved overnight (ok ok, it might take a year: and we also might run into something extremely surprising, like our current system of mathematics is inconsistent). The amount of knowledge generated becomes ridiculous.

Oh, and I suspect we could solve the price problem, which makes capitalism obsolete.
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR

Last edited by JHVH on Fri Oct 23, 4004 BCE 6:17 pm, edited 6 times in total.

Yakk
Poster with most posts but no title.

Posts: 10212
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

westrim wrote:Oh. Hmm. Well, going from this line in the Wikipedia entry "In a 2002 poll of 100 researchers, 61 believed the answer is no, 9 believed the answer is yes, 22 were unsure, and 8 believed the question may be independent of the currently accepted axioms, and so impossible to prove or disprove", I'd go with the last 8, but simply because such a simple equation can't possibly account for all the variables.

But then, this is why I stay as far as possible from math with letters in it; after one too many exchanges of "why does it equal that?" "because it does, you are intended to know that type of equation and assume that value," I realized that math was as logical and straightforward for me as economics or sociology.

Aherm. Back to the point, P and NP both have nearly infinite variables, so formulating a rock solid equation is impossible. You mathy people will just have to accept that there are just some things that you aren't going to get, because they just aren't there.

I'm sorry, but I think you've fallen a bit victim to the oversimplified explanation here. "Easy" in this case is very specifically defined as what's called "polynomial time." The explanation of what that means is more than a bit technical, so I've embedded it in spoiler tags.

Spoiler:
Sudoku is actually a very bad example because it's too specific, and can actually be solved in what's known as linear constant time, since one can just hash the starting positions; it was just given to help you with the idea of problems that can be verified more easily than solved. The problems we're dealing with have arbitrary, i.e. infinite, starting conditions, accompanied by some metric of how difficult each is to solve - for instance, a sorting algorithm would use the number of elements. The efficiency of an algorithm is measured by how this metric relates to the time the algorithm takes to complete. Only the highest term matters, coefficients don't matter, and it works kind of like poker: factorial outranks exponential outranks polynomial outranks logarithmic outranks linear constant, because for sufficiently large numbers, they'll always fall into that order, no matter what. (So, if it takes n! + 1000^n, that second term doesn't matter; ditto for n^3 + n^2.) "Factorial" means that the time is proportional to the factorial of the complexity. "Exponential" means that it's proportional to some constant raised to the power of the complexity. "Polynomial" means that it's proportional to the complexity raised to some constant power. "Logarithmic" means that it's proportional to some logarithm of the complexity. "Linear constant" just means that it's always the same. Polynomial, logarithmic, and linear constant time all qualify as "P" - as does n log n, or anything else that doesn't get ahead of polynomial time for sufficiently large n.

It is already proven that if you were to find an algorithm that solved any one of a specific class of NP problems in polynomial time, you could adapt it to solve not only any of them, but any problem that can be verified in polynomial time. These are the "NP-complete" problems Randall references, and he alludes to a specific one in the comic (notice that the time given for "dynamic programming algorithms" is still exponential; "selling on eBay" is just a joke, one that breaks the actual problem's assumptions).

I'm not saying it's necessarily possible to prove or disprove (paging Dr. Gödel...), but it's less abstract than you seem to think.

(I should point out I'm technically an undergrad, and got lousy grades my first time around, so...yeah.)

EDIT: Okay, I know I've heard "linear" used to mean "constant" because I remember an exchange from sophomore year, but I'm looking it up...maybe things have changed since '06, I don't know. Also, I'd gotten it in my head that sudoku couldn't be adapted to arbitrary size without changing the rules in non-quantifiable ways, so...yeah. I don't know the difficulty surrounding nxn sudoku.
Last edited by SocialSceneRepairman on Sat Nov 21, 2009 6:39 pm UTC, edited 1 time in total.
SocialSceneRepairman

Posts: 200
Joined: Sat May 24, 2008 4:17 am UTC

Some people have objected that only three people will read those half dozen papers. That might be true, but if the three people are the ones adding it to Matlab, Mathematica, and Maple, you'll end up having a huge impact on industry and academia anyway (but only the computers will thank you).

The comic suggests the code has applications to queuing theory. Anyone want to hazard a guess what Randall had in mind?
snarkyxanf

Posts: 1
Joined: Sat Nov 21, 2009 7:12 am UTC

SocialSceneRepairman wrote:Sudoku is actually a very bad example because it's too specific, and can actually be solved in what's known as linear time since one can just hash the starting positions; it was just given to help you with the idea of problems that can be verified more easily than solved.

Sudoku on an nxn grid is NP-complete. Sudoku specifically on a 9x9 grid can theoretically be solved in constant time, as can any puzzle with only a finite number of inputs (asymptotic complexity doesn't really apply there: there isn't a value you could call the "size of the input," let alone one you could take to infinity). I'm not sure where you're getting linear from.

But the end of that sentence is right... it was just an example of a question that can intuitively be easy to verify but potentially hard to solve.

phlip
Restorer of Worlds

Posts: 6964
Joined: Sat Sep 23, 2006 3:56 am UTC
Location: Australia

4lph4num3r1c wrote:Academia: My goodness, this dramatically decreases the time to compute the answer-- but you didn't do it recursively (the slower way with more application overhead, yet an assignment requirement) -- sorry it doesn't count.

Unfortunately, most of this particular Computer Science program was a bunch of stupid puzzle problems and not as much real-life application. Fortunately, there are more schools and different academic programs.

Iterative constructs can be written recursively without chewing up stack space:
http://en.wikipedia.org/wiki/Tail_call

gcc supports tail-call optimization (via -foptimize-sibling-calls, on by default at -O2), but Microsoft's compilers don't guarantee it.
dean.menezes

Posts: 134
Joined: Sat Nov 15, 2008 3:47 am UTC

vultur-10 wrote:Why is that? What would it allow us to do?

Well, if nothing else, those other five Millennium Prize Problems would just fall into place, because we'd be able to prove or disprove theorems by computer.

(Also, somehow I got it in my head that "linear time" rather unintuitively meant "constant time," with actual linear time just being "first-degree polynomial time." Whoops.)
SocialSceneRepairman

Posts: 200
Joined: Sat May 24, 2008 4:17 am UTC

SocialSceneRepairman wrote:
vultur-10 wrote:Why is that? What would it allow us to do?

Well, if nothing else, those other five Millennium Prize Problems would just fall into place, because we'd be able to prove or disprove theorems by computer.

Not quite. We'd only be able to prove or disprove things that have a short proof (in a fully formal system).

This leaves the possibility that some things are true, but their shortest proof requires 10^20 steps (or, more extreme, more steps than there are atoms in the universe, times the number of Planck times between the big bang and the final proton decay...)

That does generate an interesting thought: is there a bound on how long the shortest proof of a statement can be, as a function of the axioms and the length of the statement?
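For what it's worth, there's a standard argument (my addition, not from the thread) that for a sufficiently strong system, no such bound can even be computable:

```latex
\textbf{Sketch (assuming a sufficiently strong, consistent system):}
Suppose a computable $f$ bounded proof length, i.e.\ every provable
sentence $s$ had a proof of length at most $f(|s|)$. Then provability
would be decidable: compute $f(|s|)$, enumerate the finitely many
derivations of length $\le f(|s|)$, and mechanically check each one.
That contradicts the undecidability of provability for first-order
logic (Church's theorem), so the shortest proof must sometimes exceed
\emph{every} computable bound in the length of the statement.
```

So not only can shortest proofs be astronomically long, their length as a function of statement length eventually outgrows any function you could actually compute.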
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR

Last edited by JHVH on Fri Oct 23, 4004 BCE 6:17 pm, edited 6 times in total.

Yakk
Poster with most posts but no title.

Posts: 10212
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

I'd guess it's the sum of the falling powers of all the axioms and definitions in your system. (e.g., if there were 100 in all, 100 + 9900 + 98*9900 + ... + 100!)
SocialSceneRepairman

Posts: 200
Joined: Sat May 24, 2008 4:17 am UTC
