>>13769023
An AI that can predict at that level could also sedate or medicate, sidestepping the entire ethical dilemma. We're already building towards it with the attitude of the mental health industry and medication.
The future isn't going to be a violent and loud AI dystopia like in a movie. That would be too exciting and give people something to fight against.
It's going to be a convoluted bureaucracy where you can't tell which steps were signed off by a human and which by an AI, one that stifles resistance by being too annoying to bother resisting.
They aren't going to kill you or throw you in a camp. They'll lower your credit score, your social credit score, and threaten to cancel you if you don't fall in line. The worst consequence is that your quality of life will be impacted juuust enough to be undesirable, but not enough to provoke violent rebellion. Basically, they'll operate on the threat of a comfy office worker being demoted to call centers or fast food service.
This is already how cancel culture operates right now.