Is AGI just a compute problem?

No.13395789
AGI has always been viewed as an intellectually hard problem, one that would take an utter super genius with some breakthrough theory to solve. But looking at the last 10 or so years of AI research, the evidence seems to be mounting that the best method for producing more general models with some semblance of common sense, logical reasoning, and all the other good stuff that matters for AGI is just throwing more compute and data at neural networks. Which is essentially the same simple approach Rosenblatt took in the late '50s and '60s with the perceptron, except the computers back then were a joke, i.e. networks with ~10 neurons can't learn much of anything.
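To make the Rosenblatt point concrete, here's a minimal sketch of the classic perceptron learning rule (the function names and toy datasets are my own, not from any historical code). A single unit like this learns linearly separable functions such as AND just fine, but it provably cannot learn XOR, which is one precise sense in which tiny networks "can't learn much of anything":

```python
# Minimal perceptron sketch (illustrative, pure Python).
# A single linear threshold unit trained with Rosenblatt's update rule.

def train_perceptron(data, epochs=20, lr=0.1):
    """data: list of ((x1, x2), target) pairs with targets in {0, 1}."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), t in data:
            y = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = t - y
            # Rosenblatt's rule: nudge weights toward the target output
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

w, b = train_perceptron(AND)
print([predict(w, b, x1, x2) for (x1, x2), _ in AND])  # → [0, 0, 0, 1]

w2, b2 = train_perceptron(XOR)
print([predict(w2, b2, x1, x2) for (x1, x2), _ in XOR])  # never [0, 1, 1, 0]
```

The XOR failure is exactly what Minsky and Papert hammered on, and the scaling-era rejoinder is that the fix was never a cleverer rule, just more units, more layers, and the compute to train them.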

So, was the field wrong all these years? Maybe AGI isn't a hard problem but an easy one, and we actually had the solution, or at least the right direction, decades ago with connectionism; we just didn't have the compute until very recently. And despite decades of people looking for something special about human intelligence and human neurons, nothing special has been found: it looks like denser neurons and brain regions scaled up allometrically. Birds show similar cognitive functionality despite lacking the supposedly 'crucial' neuroanatomy (a layered neocortex), the early human vs. gorilla divergence looks largely like a difference of scale, and primate brains scale in a fairly regular way.

TL;DR: AGI is an easy problem; it just looked like a hard one because we lacked the necessary compute for most of AI history.