>>13928065
This is like asking "What are the chances that the Goldbach conjecture is solved in the next 20 years?" Hard math problems are just solved kind of randomly at unpredictable times. Yet for some reason we expect technology problems to be solved predictably. It might just happen tomorrow, who knows. pic unrelated
Also, for some basic trend-line drawing and extrapolation: each iteration of GPT is 100x bigger than the previous iteration, and is about 5x more likely to give you a good response for a given input.
If you have a reasonably difficult Turing-like test, I don't think there is any question that GPT has at best a one in one thousand chance of producing a satisfying output.
If we define "AI is solved" as "has a 99% chance of answering a question on a Turing-like test correctly", which I think is a fair enough goalpost, then going from 1/1000 to 99% at 5x per generation means we need about 4 more generations of GPT-like models to get there. That's not a lot.
But if each GPT-like model takes 100x more resources, that's 42 years of Moore's law. Moore's law, or anything like it, is unlikely to continue for 42 more years. On the other hand, technological improvements could make the tech massively more efficient so it requires fewer resources, so it is possible to get there through decades of gradual improvement.
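The back-of-envelope math above can be sketched like this (all the inputs are this thread's guesses, not measured values, and the 18-month doubling time for Moore's law is my assumption):

```python
# Extrapolating the thread's assumptions:
#   - current chance of a satisfying answer: ~1 in 1000
#   - each GPT generation: ~5x more likely to succeed, ~100x more compute
#   - Moore's law: compute doubles roughly every 18 months (assumed)
import math

p_now = 1 / 1000       # assumed current success rate
gain_per_gen = 5       # assumed improvement factor per generation
target = 0.99          # "AI is solved" threshold from the post

# Generations needed to multiply p_now up to the target.
gens = math.log(target / p_now) / math.log(gain_per_gen)

# Total extra compute if each generation costs 100x more.
compute_multiplier = 100 ** gens
doublings = math.log2(compute_multiplier)
years = doublings * 1.5  # 18-month doubling time

print(f"generations needed: {gens:.1f}")        # ~4.3
print(f"compute multiplier: {compute_multiplier:.1e}")
print(f"years of Moore's law: {years:.0f}")     # ~43
```

So "about 4 generations" and "about 42 years" both fall out of these assumed numbers; tweak any input and the timeline swings by decades, which is the point.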
pic unrelated but not really