Hard Takeoff in human brains and AI?

No.14230340
I recall someone, either on /sci/ or some other board, once arguing that it's nonsensical to assume an AI will start massively increasing in intelligence once it hits human level, because humans certainly don't massively increase in intelligence. But is that really true? Aren't the things that stop humans from doing the same the following:
A) We don't even fully understand our own brains
B) Even if we did fully understand them, changing them to be more intelligent isn't as simple as just willing it to happen
But wouldn't a human-level AI, especially one that reached that level through recursive self-improvement, be exempt from possibly both of those? For A), since the AI is the thing that made itself intelligent (in a recursive self-improvement scenario), unlike humans, who got theirs from evolution and random chance, it starts off with a greater understanding of itself than any human has of their own brain. For B), it's far easier to modify and experiment with a computer program than with a human brain: changing code is far easier than precisely manipulating neurons, even if you fully understood what you were doing. So why do people insist that an AI would be incapable of hard takeoff just because we ourselves are innately incapable of it?
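
To make the B) point concrete, here's a toy sketch (plain Python, everything in it is made up purely for illustration, not anyone's actual proposal) of the edit-test-keep loop that's trivial for software and impossible to run on your own neurons: the program proposes a change to one of its own settings, benchmarks the change, and keeps it only if the score improves.

import random

params = {"threshold": 0.5}          # the program's own "settings"

def fitness(p):
    # hypothetical benchmark: higher is better, best at threshold = 0.8
    return -abs(p["threshold"] - 0.8)

for step in range(100):
    candidate = dict(params)
    candidate["threshold"] += random.uniform(-0.05, 0.05)  # propose an edit
    if fitness(candidate) > fitness(params):                # test it
        params = candidate                                  # keep only improvements

print(params)   # drifts toward 0.8: edit, test, keep, repeat

Obviously real recursive self-improvement would look nothing like this toy, but the contrast stands: software can try a change and roll it back in milliseconds, while nobody can do that to their own neurons.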

pic unrelated and definitely not me