No.9438249
Why aren't governments focusing on the dangers of AI? This is nuke-level technology. Potentially far, far worse. It's obvious that pursuing artificial general intelligence will likely end in failure.

What species less intelligent than humans can predict or control a human's actions? I'll tell ya, NONE. There is no way we will be able to control anything with superhuman intelligence.

It's quite possibly the only thing in the universe we could never gain control of. Time travel? Yeah, we'll prob work that one out. Immortality? Easy, just do some CRISPR shit. Near-infinite clean energy? Give us 50 years and we'll have that nailed. Control an AI with an IQ that doubles every 12 hours? No chance. NO FUCKING CHANCE.

That's like asking an ant to control Elon Musk. At best he keeps it as a pet in a cage; at worst he kills it because it's in the way of his Gigafactory.

Guys, I'm panicking. What is the good-case scenario with AI? Living as immortal pets? Being placated by a custom VR universe that the AIs built while they get on with the real work? Is that it!? Is that the best-case scenario?? FUCCKKK, that's just the best-case scenario, what about all the bad ones?