New User wrote: [story...] Have I committed suicide?
No, except in the sense that "no one steps in the same river twice," which isn't useful here. You've learned something. You've been changed. You didn't decide to believe something, you came to believe something. That's an important distinction. People don't decide what to believe; they arrive at beliefs.
The other important distinction is that people are not digital computers. There is no "program" being run; there is no "me=/=royalty" line to modify. Our DNA contains "instructions," but not instructions for behavior, belief, or anything like that. DNA codes for proteins; that's pretty much it. Now, in the machine that is the human body, if those proteins are released at the right time, in the right circumstance, "things happen," and those things make other things happen... and this cell divides but that one doesn't... and many, many layers up we get action in response to stimulus. Many, many layers above that, we get recognition of those actions, and many, many layers above that, the ability to take coherent action based on that recognition... and then to recognize what those actions mean... (in short, we eventually get to abstract thinking).

If you want to consider the body as a computer, it is an analog computer, more like a Moog synthesizer than the digital machines now ubiquitous. There's a huge difference, and the part that matters here is that there is no "program" running that determines the output.
Reasoning that relies on the idea of a "program" running in a person's body fails.
AIs (of the kind that will matter) are similar. They are, in a sense, analog computers simulated by digital computers underneath. We program basic learning algorithms into them, but then set them loose to learn on their own in a controlled environment. In doing so, the AI starts creating "agents" (algorithmic shortcuts) in its own "mind" that interact with each other, reinforcing or cancelling each other out depending on how "successful" they are at whatever task we set up. Those agents become far more important in deciding what to do; then those agents create other agents, and it's really those agents that are doing the work. We will have no idea what those agents are doing or why. Neither will the AI. (This follows from a basic mathematical result: essentially, no entity can fully model itself.) For this reason, the AI will not be able to just "change its programming." Programming is no longer what makes the thing tick.
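To make the "agents reinforcing each other" idea concrete, here is a minimal toy sketch (purely illustrative; the agent names, the multiplicative-weight update, and the task are all my own invention, not any real AI architecture). A pool of simple heuristics competes to predict a signal; good predictors are reinforced and bad ones dampened. The resulting behavior is determined by the learned weights, not by any editable line of the original program:

```python
# Toy sketch (illustrative assumption, not a real AI system): competing
# "agents" (simple heuristics) are reinforced or dampened based on how
# well they predict a target signal. No line of this program says which
# agent should win; that emerges from the training data.

AGENTS = {
    "always_zero": lambda x: 0,
    "identity":    lambda x: x,
    "parity":      lambda x: x % 2,
}

def train(stream, target, lr=0.5):
    """Multiplicative-weights update: agents that match the target on an
    input are boosted, the rest are shrunk. Returns normalized weights."""
    weights = {name: 1.0 for name in AGENTS}
    for x in stream:
        for name, agent in AGENTS.items():
            if agent(x) == target(x):
                weights[name] *= (1 + lr)
            else:
                weights[name] *= (1 - lr)
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

weights = train(range(20), target=lambda x: x % 2)
dominant = max(weights, key=weights.get)
print(dominant)  # the "parity" agent comes to dominate
```

The point of the sketch: after training, "which agent runs the show" lives in the weights, a product of the learning process, so there is no single instruction you could rewrite to change the outcome.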