Ulc wrote: I think this topic lacks something very basic, and the absence makes the topic nonsensical.
Namely, please define what an AI is. Because from the description you give, it seems to be "a set of algorithms" for every possible circumstance. If it doesn't possess free will (the capability of saying "oh fuck you, I'm taking my toys and going home" to its maker, even if it's only a remote chance), and is unable to feel pain, pleasure, or emotion of any kind, I don't really consider it "intelligence," but rather "algorithms for number crunching."
Let's just say that it's a massive, complex neural network with several subnetworks designed to work together in a manner similar to the human brain, with one subunit in particular controlling the growth, development, and organization of the network. It starts from an originally small network state containing the essential framework for growth, and is trained to learn and act intelligently. Let's also say that modern computers are capable of running it. Maybe it executes in the 'magic smoke' processing megamatrix, because such a thing exists.
Technically, that would be a set of algorithms, but it would be a very interesting set of algorithms that all interfered with each other in a way that would make a very nice screensaver and/or robot slave.
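To make the hand-waving slightly more concrete, here is a minimal sketch of that "subunit controlling growth" idea: a tiny feed-forward network that starts from a small seed and can add hidden units when a separate growth policy decides the loss has plateaued. Everything here (the names `GrowingNet`, `growth_controller`, the plateau rule) is invented for illustration, not a real library or the poster's actual design; the one real trick shown is that new units get zero outgoing weights, so growth never changes the network's current behaviour.

```python
import numpy as np

class GrowingNet:
    """Toy feed-forward net that starts as a small 'seed' and can grow.

    Purely illustrative: a stand-in for the 'massive neural network with
    a growth-controlling subunit' described above, not a real design.
    """

    def __init__(self, n_in, n_hidden, n_out, rng):
        self.rng = rng
        self.W1 = rng.normal(0, 0.5, (n_in, n_hidden))   # input -> hidden
        self.W2 = rng.normal(0, 0.5, (n_hidden, n_out))  # hidden -> output

    def forward(self, x):
        h = np.tanh(x @ self.W1)
        return h @ self.W2

    def grow(self, n_new=1):
        # Add hidden units whose *outgoing* weights start at zero, so the
        # network computes exactly the same function at the moment of
        # growth; training can then recruit the new capacity.
        new_in = self.rng.normal(0, 0.5, (self.W1.shape[0], n_new))
        self.W1 = np.concatenate([self.W1, new_in], axis=1)
        self.W2 = np.concatenate(
            [self.W2, np.zeros((n_new, self.W2.shape[1]))], axis=0)

def growth_controller(loss_history, patience=3, tol=1e-3):
    """The 'subunit controlling growth': grow when loss has plateaued,
    i.e. the last `patience` losses vary by less than `tol`."""
    if len(loss_history) < patience:
        return False
    recent = loss_history[-patience:]
    return max(recent) - min(recent) < tol

rng = np.random.default_rng(0)
net = GrowingNet(n_in=4, n_hidden=2, n_out=1, rng=rng)
x = rng.normal(size=(8, 4))

before = net.forward(x)
net.grow(n_new=3)          # seed of 2 hidden units grows to 5
after = net.forward(x)     # identical outputs: growth is behaviour-preserving
```

The design choice worth noting is the zero-initialized outgoing weights: it lets the controller restructure the network mid-training without disrupting whatever the network has already learned, which is roughly what "one subunit controlling growth and organization" would need in practice.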
fr00t wrote: Honestly, your question assumes or glosses over so much that there isn't a meaningful answer. Or maybe the answer is "depending on the implementation, anything conceivable, and some things that aren't." What does obedient mean? Why can't it have emotions? What if they program it to have its own agency, desires, and sense of purpose? Why isn't it twice as smart as a human, or a thousand times, and by what metric? Is there some meaningfully objective scale of intelligence?
Obedience means it does what it's told, and doesn't do what it isn't. It can't have emotions because emotions are destabilizing, and they would make inventing it harder. Giving it agency of its own would require that it have desires, and I suppose that, if ordered to, it could probably modify its decision-making to account for a whole new subsystem. A sense of purpose would be about the same as desires, just stronger. I don't believe there is a meaningfully objective scale of intelligence, but I also don't believe that comparing intelligence matters; only speed and correctness do.
fr00t wrote: My actual opinion on this subject, which I would not usually espouse in meat-space, is that AI (probably not in my lifetime) will manifestly change the human experience to a degree unprecedented in prior technological development. It's not a question of "how many jobs will AI displace" but more like "how long until the last organic human mind is digitized." I hope, in a weird sort of quasi-mystical way, that the transition is done with grace and efficiency, and that we keep the good human values, like curiosity and love, and leave behind the bad ones, like superstition and hierarchy; but ultimately I have little sentimentality toward my society and species, with our pressurized fluid-sack bodies and brutish internal combustion engines.
In short, I don't think there is any chance that the popular sci-fi interpretation of AI will come true, wherein monkeys fly spaceships around and all the AI does is brew coffee and fail to understand sarcasm.
Nice way to put it. I believe in human digitization as well (I think it's inevitable), but I was thinking more about how technology like this would change our society as it is today, assuming that the AI has no anthropomorphic desires, biases, or emotions, at least to begin with.