All Shadow priest spells that deal Fire damage now appear green.
Big freaky cereal boxes of death.
WarDaft wrote:Not to mention that there's no evidence that the operations of neurons aren't computable by a Turing machine as well (by less in fact, but we don't need that for this particular case)

Actually, there is: Neuron-based life forms have been observed in rare cases to exhibit intelligence. (Since it appears you tend to hang out with Yudkowskybots, it's understandable you might not be aware of this.)

WarDaft wrote:there is considerable reason to believe that any operation you can conceive of can be represented on a TM's tape

So "does this program halt or not" is inconceivable? I don't think that word means what you think it means.

WarDaft wrote:and that it can do everything with that operation that you actually can.

OK, I'm not going to try to prove that I can solve the halting problem in the general case (even if I can, it would take infinitely long to demonstrate), so here's a different problem: let's see a Turing machine and input tape to generate secure 128-bit symmetric encryption keys, something I can do quite easily. (If it generates the same key every time, that's not secure.)
Goplat wrote:here's a different problem: let's see a Turing machine and input tape to generate secure 128-bit symmetric encryption keys, something I can do quite easily. (If it generates the same key every time, that's not secure).
I'd love to see that. I've never heard of a human generating randomness remotely as well as even the crappiest (artificial) PRNG, let alone better than any turing machine can.
troyp wrote:I'd love to see that. I've never heard of a human generating randomness remotely as well as even the crappiest (artificial) PRNG, let alone better than any turing machine can.
Goplat wrote:Actually, there is: Neuron-based life forms have been observed in rare cases to exhibit intelligence. (Since it appears you tend to hang out with Yudkowskybots, it's understandable you might not be aware of this.)

Wait, you're saying it's impossible for a Turing Machine to model intelligence? No matter what? That's silly. It requires that the universe be fundamentally incomputable, not just non-deterministic (which can be trivially simulated by an NDTM, and thus by a DTM), which makes it a far more scary and confusing place than we would like to think. We don't have many ways of describing things where the rules that govern the syntactical correctness of our description are incomputable. And by not many, I mean zero. To find out that the universe fundamentally has no computable set of laws would be akin to finding out the moon is in fact made of cheese. And that it's angry cheese. No, actually, I think it would be worse.
Goplat wrote:So "does this program halt or not" is inconceivable? I don't think that word means what you think it means.

I think it means exactly what I think it means. Many axiomatic mathematical systems are computable. Many of these systems can formulate the halting problem. Therefore, you can represent the question on a Turing Machine. You just can't answer it, in general, dependably, with one.
Goplat wrote:OK, I'm not going to try to prove that I can solve the halting problem in the general case (even if I can, it would take infinitely long to demonstrate), so here's a different problem: let's see a Turing machine and input tape to generate secure 128-bit symmetric encryption keys, something I can do quite easily. (If it generates the same key every time, that's not secure).

I'm not going to give you that, TMs are a real bother to program in. Instead, I'm going to put you in a plain white box, and take a perfect snapshot of it (pretend this doesn't violate QM for a moment) and then let you out and ask you for a key - you will be provided with a computer. Then, later I'm going to instantly fabricate a perfect copy of you from the snapshot, and ask you for the key again, in a manner fundamentally indistinguishable from the first time (again assume no interference from QM). You have 24 hours before I put you in the box to plan some mental tactic that will ensure that you give me two different keys. There will be no static from the environment, no random variations, you are the only possible source of two different keys. And I'm going to do the second one at a remote, undisclosed location, so that the 'first' you cannot interfere from the outside.
EvanED wrote:troyp wrote:I'd love to see that. I've never heard of a human generating randomness remotely as well as even the crappiest (artificial) PRNG, let alone better than any turing machine can.
Well, in Goplat's defense, if you're (unfairly, according to me) not allowing the TM to have a variable seed, then even the best PRNG is pretty bad. :-)
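For concreteness, EvanED's caveat about the seed is the whole crux of the key-generation challenge. A minimal Python sketch (using only the stdlib `random` and `secrets` modules): a fixed-seed PRNG is a TM with a fixed input tape and reproduces the same "key" every run, while a machine whose tape includes outside entropy does not.

```python
import random
import secrets

# A deterministic PRNG with a fixed seed is a Turing machine with a fixed
# input tape: it emits the same 128-bit "key" on every run.
key_a = random.Random(42).getrandbits(128)
key_b = random.Random(42).getrandbits(128)
assert key_a == key_b  # same tape, same key: useless as a secret

# Let the input tape carry fresh OS entropy and the objection evaporates:
# each run yields a different 128-bit key.
key_c = secrets.token_bytes(16)  # 16 bytes = 128 bits from the OS CSPRNG
key_d = secrets.token_bytes(16)
assert key_c != key_d  # collision odds ~ 2**-128
```

Whether a human in WarDaft's sealed white box retains access to any such "entropy tape" is exactly what the thought experiment is probing.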
troyp wrote:Since when is intelligence evidence of supra-turing computational abilities?? Not only is that speculative, it seems highly improbable (at least to me)

One thing an intelligent being can do is solve arbitrary logic puzzles, like the ones posted three forums down from here. One type of logic puzzle is "does program X ever halt"; this always has a definite yes-or-no answer, but it has already been proven that no Turing machine is capable of correctly solving all of them.
WarDaft wrote:Wait, you're saying it's impossible for a Turing Machine to model intelligence? No matter what?

Yes. It's true by definition under the strongest standard of intelligence, and empirically seems to be true under weaker definitions.
WarDaft wrote:That's silly. It requires that the universe be fundamentally incomputable, not just non-deterministic (which can be trivially simulated by an NDTM, and thus by a DTM) which makes it a far more scary and confusing place than we would like to think. We don't have many ways of describing things where the rules that govern the syntactical correctness of our description are incomputable. And by not many, I mean zero.

But how many universes are there whose laws have been described completely and found to be computable? Even allowing hypothetical universes, as long as intelligence is proven possible in them, the answer is again zero. You can hardly say that an incomputable universe is unthinkable based on evidence, because there is none. And is it really any more scary and confusing than the idea that there exists a logic puzzle, with a definite yes-or-no answer, that you would never be able to solve even given unlimited time? Being inside a universe where everything is computable does imply that.
Goplat wrote:One thing an intelligent being can do is solve arbitrary logic puzzles, like the ones posted three forums down from here. One type of logic puzzle is "does program X ever halt"; this always has a definite yes-or-no answer, but it has already been proven that no Turing machine is capable of correctly solving all of them.
Goplat wrote:Or take another thing considered to be characteristic of intelligence, use of natural language: no program so far has been capable of recognizing either spoken or written words with accuracy above even an unintelligent human (even recognition of printed words is often pretty poor). And attempts at programs that try to carry out conversation have thus far been utterly laughable.
enum ಠ_ಠ {°□°╰=1, °Д°╰, ಠ益ಠ╰};
void ┻━┻︵╰(ಠ_ಠ ⚠) {exit((int)⚠);}
Goplat wrote:One thing an intelligent being can do is solve arbitrary logic puzzles, like the ones posted three forums down from here. One type of logic puzzle is "does program X ever halt"; this always has a definite yes-or-no answer, but it has already been proven that no Turing machine is capable of correctly solving all of them.

Then humans are not intelligent beings.
Goplat wrote:And is it really any more scary and confusing than the idea that there exists a logic puzzle, with a definite yes-or-no answer, that you would never be able to solve even given unlimited time?

Yes. Yes yes. Yes yes yes yes yes. If the laws of physics are incomputable, we will never learn them. They will require some operator whose actions we cannot describe without an infinite amount of information. One beautiful thing about QM is that it rejects the hidden variable hypothesis; otherwise, we would live in just such a world.
Goplat wrote:Yes. It's true by definition under the strongest standard of intelligence, and empirically seems to be true under weaker definitions.
Alan Turing wrote:In other words then, if a machine is expected to be infallible, it cannot also be intelligent. There are several mathematical theorems which say almost exactly that. But these theorems say nothing about how much intelligence may be displayed if a machine makes no pretence at infallibility.
phlip wrote:Goplat wrote:One thing an intelligent being can do is solve arbitrary logic puzzles, like the ones posted three forums down from here. One type of logic puzzle is "does program X ever halt"; this always has a definite yes-or-no answer, but it has already been proven that no Turing machine is capable of correctly solving all of them.
So, you know an intelligent being who is capable of determining whether a given TM halts, every time? Neat! Can you ask them what BB(5) is? Many people would love to know!
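The proof Goplat and phlip are both leaning on is Turing's diagonal argument, which can be sketched in a few lines of Python. `halts` below is a hypothetical placeholder standing in for the supposed decider, not a real function; the point is that no correct total implementation of it can exist.

```python
# Diagonal argument sketch: assume a total, always-correct function
# halts(prog, arg) answering "does prog halt on input arg?".
# The placeholder below just marks the assumption; it is not real code.

def halts(prog, arg):
    raise NotImplementedError("no such total decider can exist")

def diag(prog):
    # Built to do the opposite of whatever halts() predicts:
    # loop forever exactly when halts says prog(prog) halts.
    if halts(prog, prog):
        while True:
            pass
    return "halted"

# Now ask: does diag(diag) halt? If halts(diag, diag) -> True, then
# diag(diag) loops forever; if False, it halts immediately. Either way
# halts is wrong about diag, contradicting the assumption.
```

Note the theorem only rules out a decider that is correct on *all* programs; it says nothing against a machine (or a human) that answers many instances correctly and stays silent or errs on the rest, which is where the Turing quote above points.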
jareds wrote:The Lucas-Penrose argument requires very extreme beliefs about human abilities. It doesn't necessarily require personal arrogance, but it does require arrogance about the abilities of humanity as a whole. (Penrose more or less argues that the human mathematical community could be infallible if it were really careful.) This is not to say that the belief that humans aren't TMs is inherently nuts, but the argument in question is really dubious. Yes, Penrose is probably smarter than me, but that doesn't mean he can't make a nutty argument. I'm sure that there are people smarter than me who would argue that the Bible is inerrant because the Bible says that the Bible is inerrant.
Yakk wrote: Claims that humans can solve the halting problem on arbitrary TMs is laughable in this light.
HungryHobo wrote:For one there's never been a computer built in the physical universe to which the halting problem actually applies.
korona wrote:If a human can determine whether a given Turing machine/computer program halts, and he/she can explain the reason to another human, then this explanation can be converted to a proof in your favorite theory (ZF, for example) that the Turing machine halts. If there is a proof in such a theory, it can be found by an automated theorem prover, so a Turing machine can also determine whether the initial program halts.
This procedure only fails if the human is unable to explain why the program halts or if the explanation cannot be converted to a proof in a suitable proof system because it uses non-standard and/or infinite first-order axioms.
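korona's procedure amounts to dovetailing two searches: simulate the program for more and more steps, and in parallel enumerate candidate proofs of non-halting in some fixed formal system. A minimal sketch, where the three callables are hypothetical stand-ins (a step-bounded simulator, a proof enumerator for the chosen theory, and a proof checker), not a real API:

```python
from itertools import count

def korona_decider(halted_within, proofs_up_to, proves_nonhalting):
    """Dovetail simulation and proof search.

    halted_within(n)    -> True if the program halts within n steps
    proofs_up_to(n)     -> the first n candidate proof objects
    proves_nonhalting(p)-> True if p is a valid proof of non-halting

    Returns True/False when either search succeeds. It loops forever
    only if the program never halts AND the chosen theory proves no
    such fact -- exactly the Goedelian gap korona describes.
    """
    for n in count(1):
        if halted_within(n):          # the simulation settled it
            return True
        for proof in proofs_up_to(n): # the proof search settled it
            if proves_nonhalting(proof):
                return False

# Toy instantiation: a "program" that halts after 5 steps,
# with an empty proof enumerator.
assert korona_decider(lambda n: n >= 5, lambda n: [], lambda p: False) is True
```

The sketch makes the failure mode concrete: both prongs are mere semi-decisions, so the whole is still only a partial decider, matching the caveat about non-standard or infinite axioms.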
WarDaft wrote:The assumptions the TM makes about halting are essentially axioms, and define what it can see to be true if it included an actual proof writing aspect. It just overturns them more than we do. Well maybe, we might do a lot of throwing out of axioms in the limit.
jareds wrote:This is kind of unfair. The non-computable-human camp aren't outright illogical.
jareds wrote: I'm sure they understand that halting-oracle humanity can't be limited to proofs in a fixed formal system--that's almost exactly their point!
WarDaft wrote:The TM is just more obviously naive than we are. We don't know that PA is consistent, the TM is just doing less work to detect inconsistent axioms and considering more at any particular time.
troyp wrote:The more sophisticated ones, anyway*. But this isn't saying much: we're talking about an empirical proposition, so of course it's possible to make logically consistent arguments to support it.
jareds wrote:You are the one who said that my original (2), which stated that humanity would not "accept[] any [axioms and/or formal systems] that were not true/sound (or at least 1-consistent)", was "vaguely reasonable".

I'm saying we're not infallible at it, but no one actually expects (for example) PA to have any contradictions show up, even though it's still entirely possible for it to happen at almost any point. There are lots of theorems we accept as "true" because they've been proven in PA. We don't absolutely know the rules we have work, but we can iteratively (and up to the thermodynamic limit, indefinitely) discard ones that we find don't work. So eventually we will have correct but not fully justified beliefs about the halting of various machines, because we've 'proven' them. We don't know when we actually have them, we just tend to assume the current proofs we have are in consistent systems.
(2) was an attempt to state the heart of the ridiculous implication of the argument that humans are non-computable because of the halting problem and/or incompleteness theorem. If you're saying that humanity is not infallible at weeding out inconsistent theories, then of course I agree, but that's a serious problem for (2).