>>13723900
It depends on what you would include in "automated statistics", since you could theoretically describe brain function as "just automated statistics" too, albeit with crazy physical machinery.
Lots of progress in the last 5 years in ML, especially around natural language processing with RNNs/LSTMs/GRUs, attention mechanisms and transformers, and more recently the interest in few-shot learning and reinforcement learning. Attention mechanisms and few-shot learning look to me like basic necessities for any set of criteria for what counts as "AI" (for me, the ability to focus on things is important, and generalizing from very few examples is something people can do but regular old ML models can't).
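For anyone who hasn't looked under the hood, the "focusing" part is literally just a learned weighted average. Here's a minimal numpy sketch of scaled dot-product attention (the transformer core); the shapes and random values are toy assumptions, not any real model's:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Each output row is a weighted average of the rows of V,
    where the weights encode how much each query 'focuses on' each key."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # similarity of every query to every key
    weights = softmax(scores, axis=-1)   # normalize into a focus distribution
    return weights @ V

# toy example: 3 queries attending over 4 key/value pairs of dimension 8
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 8)), rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
print(attention(Q, K, V).shape)  # (3, 8)
```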
Most models are single-task and laser-focused; for example, GPT-3 and friends only do natural language generation/translation. We could take a number of single-task models and wrap them up in a large multi-task model that uses their outputs as inputs, with the goal of acting like AI or some such.
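Purely hypothetical sketch of that wrapping idea; none of these names or interfaces are a real API, and the keyword routing is a dumb stand-in for whatever learned policy you'd actually want:

```python
from typing import Callable, Dict

class Controller:
    """Routes an input to one of several single-task 'expert' models
    and returns that expert's output as the combined system's answer."""
    def __init__(self, experts: Dict[str, Callable[[str], str]]):
        self.experts = experts

    def respond(self, prompt: str) -> str:
        # Trivial keyword routing as a placeholder for a learned router.
        if "translate" in prompt.lower():
            return self.experts["translator"](prompt)
        return self.experts["language_model"](prompt)

# stub "models" standing in for things like GPT-3
controller = Controller({
    "language_model": lambda p: f"[generated text for: {p}]",
    "translator":     lambda p: f"[translation of: {p}]",
})
print(controller.respond("translate 'hello' to French"))
```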
The problem is that we define AI as "acts like human", like, seriously. Doing things beyond human capability is considered "not intelligence", but being able to communicate like a human and come up with new ideas like a human would is considered AI. So we would need a model that is opaque and generative (like the current generative models that can make "art"), can communicate, and acts like a person. If it communicates but not like a person, we won't "recognize" it as AI. If it has thoughts that aren't human-like, we consider it "not intelligent". Part of considering something AI is whether we recognize it as human, or at the very least as having some emergent behavior that wouldn't be predicted otherwise, a la a far more complicated Conway's Game of Life.
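For reference, all of Game of Life is the few lines below (numpy version with wraparound edges, glider pattern is the classic one); the rules say nothing about motion, yet gliders travel across the grid, which is exactly the kind of unpredicted emergent behavior I mean:

```python
import numpy as np

def step(grid):
    # count live neighbors by summing the 8 shifted copies of the grid
    neighbors = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
    # a cell is alive next step if it has 3 neighbors,
    # or 2 neighbors and was already alive
    return ((neighbors == 3) | ((neighbors == 2) & (grid == 1))).astype(int)

# a glider: five live cells that end up sliding diagonally forever
grid = np.zeros((10, 10), dtype=int)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1
for _ in range(4):          # after 4 steps the glider has moved one cell
    grid = step(grid)
print(grid)
```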