I think symbolic AI has a theoretical purpose that's being overlooked. The best way to build an AI system is to have an AI build it, right? But the best way to build the AI system that builds AI systems is to have an AI system tell *you* how to build it. Right now, AIs are very bad at being interpretable - even if one could find a sequence of steps that would produce a self-improving AGI, it would have a hard time getting us to follow them. Symbolic outputs that express the machine's understanding of facts are going to be an important intermediate step, unless someone happens to stumble upon a neuroscience-based AGI system on their own in the meantime.