Game theory of Roko’s Basilisk

No.13039240
>Inb4 not science or math. Yes, this is math: game theory, specifically.
I assume we are all familiar with Roko's Basilisk, but just in case: imagine someone creates a super-AI that punishes, with super-AI torture, all the people who didn't help it come into existence. It's supposed to be a scary idea because unless you are quite sure that not a single person will ever succeed in building it, you should join the people trying. It's presented as a novel thought experiment, BUT I think it only seems new because of the AI buzzword. The AI is in fact not central or necessary to the concept: people who saw the Nazis or various Communist movements rising might have decided they had better help, lest they be killed once the organization takes charge and decides to destroy all the counter-revolutionaries. For a more topical example, many people today tweet that trans women are women, or other PC statements they don't actually believe, and by doing so avoid getting cancelled in the future by a culture they are helping to become dominant, and which, in a sense, they are thereby creating.
From a game-theory point of view, is there any difference between Roko's AI Basilisk and the general concept I gave (political) examples of, which presumably exists in other contexts too, e.g. immunization policy, antibiotic use, or biological evolution?
Am I missing something that makes Roko’s idea different?
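To make the comparison concrete, here is a minimal sketch of the payoff structure as I read it; the function name, the specific `cost` and `punishment` numbers, and the framing are all my own assumptions, not anything canonical from Roko's formulation. In every version of the story (AI, revolutionaries, cancel culture), helping has a small fixed cost, while not helping incurs a large punishment only if the movement ends up succeeding:

```python
# Hypothetical sketch of the "retroactive-punishment game" common to the
# Basilisk and the political examples. Numbers are illustrative only.

def payoff(i_help: bool, movement_succeeds: bool,
           cost: float = 1.0, punishment: float = 100.0) -> float:
    """Payoff to one player, conditioned on whether the movement wins."""
    if i_help:
        return -cost  # you pay the cost of helping either way
    return -punishment if movement_succeeds else 0.0

# If you believe the movement will succeed, helping is the better reply:
assert payoff(True, True) > payoff(False, True)    # -1 > -100
# If you believe it will fail, not helping is the better reply:
assert payoff(False, False) > payoff(True, False)  # 0 > -1
```

On this reading it is a coordination game with two self-fulfilling equilibria ("everyone helps" and "no one helps"): your best reply depends only on what you expect everyone else to do, and the AI, the revolutionary party, and the dominant culture differ only in the labels on the payoffs, which is exactly the point of the question above.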