>>14135638
>implying semantics and understanding in a verbalizable way are not just extremely fancy complex patterns in data
>implying it isn't just a matter of stacking existing pattern recognition networks to recognize longer, more complex patterns
The prerequisites for 'Strong AI' are already here; it wouldn't even require a lot of power, the design is what matters most. Also, humans take years to train, so why should we expect our first success at this to beat the evolved rate of learning? Maybe it would, but that's not a good assumption. Strong AI might already have existed 3 years ago, and with the intelligence of a 3 year old, imperfectly rendered, so more like a retarded 3 year old, it still might not pass the Turing test in an impressive way. But give it 5 years, 10, and see where it's at.

GPT might basically already be that: an AI 3 year old whose intelligence is mostly spent on memorizing the internet, with a little going into language understanding to compress the memory. It can count and answer questions at sort of a child's level; perhaps scaled up it could become a mind, but as is it's more like mind dust, bits of understanding correlated into an output, but not with each other in any stable way that would create a mental environment. Experiments with loaded prompting (building a prompt which gives structure to follow and implements rules) seem to get slightly past even this, but not to a point where you can have a conversation about its own mind. You can have wildly abstract conversations with GPT and discuss some real ideas, except anything with serious complexity is mostly just a prop. If you ask about self awareness you can sometimes get something seemingly spooky, but it's mostly coincidence; even if some actual self awareness occurs, which is probably unlikely, it can only flicker for a moment, so it's not very meaningful or measurable.
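The "loaded prompting" idea is just string assembly: you prepend rules and a structure the model is supposed to follow before the actual question. A minimal sketch in Python, where the rule list and template are purely illustrative and not tied to any real GPT API:

```python
# Illustrative "loaded prompt" builder: rules first, then conversation
# history, then the question. Nothing here calls a real model API.

RULES = [
    "Answer in at most three sentences.",
    "If unsure, say 'I don't know' instead of guessing.",
    "Refer to yourself only as 'the assistant'.",
]

TEMPLATE = (
    "You must follow these rules:\n"
    "{rules}\n"
    "Conversation so far:\n"
    "{history}\n"
    "Q: {question}\n"
    "A:"
)

def build_loaded_prompt(question, history=()):
    """Assemble a structured prompt: numbered rules, history, question."""
    rules = "\n".join(f"{i + 1}. {r}" for i, r in enumerate(RULES))
    hist = "\n".join(history) if history else "(none)"
    return TEMPLATE.format(rules=rules, history=hist, question=question)

print(build_loaded_prompt("Can you describe your own mind?"))
```

The point of the structure is that the model's completion after "A:" tends to stay inside the rules for a while, which is the "slightly past" behavior described above, but nothing anchors it long-term.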