>>13859543
You actually raise a really good point. What even is AGI?
Usually, the "general" in AGI refers to the ability to learn anything a human can. But I think you can see how incredibly vague that definition is. For instance, not every human is equally smart, so we would probably have to take the average. But in that case, if an AI were just slightly below average human intelligence, would it suddenly stop being "general"? Because of all these weak definitions (or the lack thereof) surrounding AGI, I often see people spend more time arguing semantics than discussing the capabilities of today's AI systems.
Still, everybody agrees that whatever we have today is not AGI. GPT-3 is an impressive system, but there's no way it could function like a human in the real world. Even so, the situation does look promising. Many researchers got BTFO by GPT-3's performance, even though it's essentially just a much bigger version of its predecessor. Surprisingly, scaling it up alone made it noticeably more capable, and it handles questions very differently than GPT-2 did.
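You can get a feel for the scaling effect yourself with the public GPT-2 checkpoints (rough Python sketch using the HuggingFace transformers library; the model names, prompt, and sampling settings are just examples I picked, and obviously GPT-3 can't be included since the weights aren't released):

from transformers import pipeline, set_seed

set_seed(42)  # fixed seed so the two runs are comparable
prompt = "Q: Why does ice float on water?\nA:"
for name in ["gpt2", "gpt2-xl"]:  # ~124M vs ~1.5B parameters, same architecture
    gen = pipeline("text-generation", model=name)
    out = gen(prompt, max_new_tokens=40, do_sample=True)
    print(f"--- {name} ---")
    print(out[0]["generated_text"])

Same architecture in both cases, just more parameters, and the quality gap in the answers is usually obvious even at that scale.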
>what tasks would we apply them to?
Anything a human does. Expect a massive spike in unemployment rates when it arrives, though we likely won't need AGI for that to happen.
>Some of my friends have given their credit card info to scam chatbots
The Turing test gets passed so easily nowadays that it barely means anything. Still, it's fun watching humans get fooled by bots. My dad got a call from his bank recently, and it turned out he had been talking to a very advanced chatbot that sounded just like a real human and answered all of his questions perfectly and naturally.
OpenAI deserve plenty of criticism for not open-sourcing GPT-3, but I can see how it could be misused.