That's the fucking point, because you are wrong on both counts. We are barely out of the expert-systems paradigm because we are literally unable to program anything else, even if neural networks mean we no longer have to do the grunt work of building those rulesets by hand. Even if we presume continued development of neural networks (and there's plenty of room for it), no amount of programming will yield a strong AI through them, because that's not what neural networks are capable of doing. Everything else, the approaches that could yield a strong AI by letting a system expand beyond its rulesets? Those require two to three orders of magnitude more compute than building-sized supercomputers provide, and even graphene is at its transistor limits by the time you get to that point.
Which is the second front: we have no known method of general AI that can actually work within current infrastructure, and we know all current and upcoming materials aren't up to the task of reducing the number of processing units into the realm of the practical, or even the merely useful.
You completely misunderstand the issue at hand and believe that my criticism of the power argument is just that we haven't used the resources well enough. That's false. The argument is that, by any known method, we have neither the power nor the tools to do it. Watson is a perfect example of all of this, because Watson's programming tries to circumvent the limitations of neural networks by being, at its core, a parsing engine that then vomits its database at you. If you ask Watson for every country with fewer than ten rivers, Watson can parse that easily, just as Siri and the rest can, and provide a limited sample of what its database holds, but no inferences from it. That's the current state of AI, and for the foreseeable future that's likely as far as it will go, given the limitations of the very models we can actually implement without constructing whole towers of hardware to run them.
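To make the parse-then-retrieve point concrete, here's a minimal sketch of that style of system. This is not Watson's actual architecture; the knowledge base, river counts, and question template are all made up for illustration. The point is that such a system only matches a recognized pattern against stored facts, and anything outside the template, even a trivially easy inference, gets no answer at all.

```python
import re

# Toy knowledge base: country -> number of rivers.
# These values are illustrative, not real geographic data.
KNOWLEDGE_BASE = {
    "Malta": 0,
    "Iceland": 50,
    "Bahrain": 0,
    "Brazil": 200,
}

def answer(question: str):
    """Parse a narrow class of questions and look up stored facts.

    This is pattern-matching plus retrieval: recognize the template
    'fewer than N rivers', filter the database, return matches.
    No reasoning happens beyond what is literally stored.
    """
    match = re.search(r"fewer than (\d+) rivers", question)
    if match:
        limit = int(match.group(1))
        return sorted(c for c, n in KNOWLEDGE_BASE.items() if n < limit)
    # Any question outside the template is simply unanswerable,
    # even when a human could infer the answer from the same data.
    return None

print(answer("Which countries have fewer than 10 rivers?"))
print(answer("Could a country with no rivers suffer a river flood?"))
```

The first query succeeds because it fits the template; the second returns `None` even though its answer follows directly from the stored facts, which is exactly the "no inferences from it" limitation described above.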