>>13282045
Counterpoint: actual progress is driven by research that is neither reluctant to tinker nor afraid to build theory. The excellent researcher knows when to play around with what they've built because, despite the model's simplicity, they don't understand it. They also know that you can only go so far before you hit problems that force you to refocus and refine the theory.
Deep learning, however, is particularly focused on experiments and tinkering. That isn't bad at all; in fact, it's incredibly important for the reasons above. But it doesn't follow that there haven't been big advances in theory, that theorists are holding people back, or that theory is always "behind the times." Here are two papers on the matter:
"Recent advances in deep learning theory":
https://arxiv.org/abs/2012.10931
"Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges":
https://arxiv.org/abs/2104.13478
I want to agree with your point about theorists wasting time in deep learning especially, but I can't bring myself to make fun of theorists, because it reminds me of the problem Rahimi raised at NIPS 2017 in his talk about how machine learning has become a lot like alchemy. There are simple problems in popular tools that even seasoned experimentalists struggle to explain in any way that doesn't just defer to "well, those params + this model don't work."
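
To make the alchemy point concrete, here's a toy sketch (mine, not from Rahimi's talk; the network, data, and learning rates are made up for illustration): the exact same tiny model on the exact same data either trains fine or falls apart depending on nothing but the learning rate, and the explanation on offer is usually just "that lr doesn't work with this model."

import torch
import torch.nn as nn

def train(lr, steps=200):
    torch.manual_seed(0)  # identical data and init on every run
    X = torch.randn(256, 10)
    y = (X.sum(dim=1, keepdim=True) > 0).float()  # trivially learnable labels
    model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    return loss.item()

print("lr=0.1:", train(0.1))  # typically converges to a small loss
print("lr=5.0:", train(5.0))  # often stalls or blows up; the "why" is the hard part

Nothing in that script is exotic, and yet predicting which learning rates work ahead of time, rather than finding out empirically, is exactly the kind of question where the theory is still catching up.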