I've been reading the dialogues on LessWrong between Eliezer Yudkowsky & MIRI and various other members of the community, and my current model of Eliezer's belief is roughly: "on the current trajectory, there is over 90% probability that some form of advanced artificial intelligence will kill all humans, and this will most likely happen during this century" (if you think my model of Eliezer's belief is wrong, please tell me).
https://www.lesswrong.com/posts/CpvyhFy9WvCNsifkY/discussion-with-eliezer-yudkowsky-on-agi-interventions
https://www.lesswrong.com/posts/7im8at9PmhbT4JHsW/ngo-and-yudkowsky-on-alignment-difficulty
https://www.lesswrong.com/posts/hwxj4gieR7FWNwYfa/ngo-and-yudkowsky-on-ai-capability-gains-1
https://www.lesswrong.com/posts/vwLxd6hhFvPbvKmBH/yudkowsky-and-christiano-discuss-takeoff-speeds
The death of everyone currently on the planet is relatively bounded as a negative (though still on the order of −2×10^11 QALY), but the prevention of all future human births (via everyone being dead at the same time) is a staggeringly huge disutility (−10^20 QALY is low-balling it).
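As a rough sanity check on the first figure (assuming roughly 8 billion people alive today and something like 25 expected quality-adjusted life-years remaining per person on average; both are my own ballpark assumptions, not numbers taken from the dialogues):

$$8 \times 10^{9}\ \text{people} \times 25\ \text{QALY/person} = 2 \times 10^{11}\ \text{QALY}$$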