>>11724673
Depends on what you work on and how you are funded. ML decomposes into different topics just like other fields of mathematics.
I'm funded through a data science project, but my research focus is in theory - I do work in online learning & spectral graph theory.
I've done a couple of papers on neural networks, mainly because working with them is the easiest way to get pubs in high-quality venues. I do find them pretty interesting from a mathematical perspective as well. I mainly work on techniques to encourage robustness (e.g. against adversarial examples). I use JAX for most of my NN work.
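To give a flavor of the adversarial-examples setting: the fast gradient sign method (FGSM) is the classic way to craft a perturbation that increases a model's loss. This is just a minimal JAX sketch on a toy linear model, not anything from my actual research code; the model, loss, and epsilon are all placeholder choices.

```python
# Minimal FGSM sketch in JAX. The linear "model" and eps=0.1 are
# illustrative assumptions, standing in for a real network and attack budget.
import jax
import jax.numpy as jnp

def loss(params, x, y):
    # squared error of a linear model; stands in for a real network loss
    pred = jnp.dot(x, params)
    return (pred - y) ** 2

def fgsm(params, x, y, eps=0.1):
    # perturb the *input* in the direction that increases the loss
    grad_x = jax.grad(loss, argnums=1)(params, x, y)
    return x + eps * jnp.sign(grad_x)

params = jnp.array([1.0, -2.0])
x = jnp.array([0.5, 0.5])
y = 0.0
x_adv = fgsm(params, x, y)
# by construction the perturbed input should incur at least as much loss
print(float(loss(params, x_adv, y)) >= float(loss(params, x, y)))  # → True
```

Robustness work is then largely about training so that small perturbations like this one can't blow up the loss (e.g. adversarial training, certified bounds).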
Officially I'm in the CSE department, but I collaborate closely with the ECE, math & data science departments. Despite what a lot of people believe, and even though there is a lot of overlap, there are pretty fundamental and significant differences between grad ML and pure/applied stats & optimization. One question CS deals with, and stats only to a lesser degree, is learnability - e.g. what does it mean to be learnable? When is something learnable? What can be learned at all? Conversely, a question dealt with in stats is what it means to estimate something - how do you formalize estimation under different assumptions on the environment?
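The standard way the CS side formalizes "learnable" is PAC learnability; roughly (this is an informal sketch, not a precise statement with all measurability caveats):

```latex
% A hypothesis class H is (agnostically) PAC-learnable if there exist an
% algorithm A and a sample-size function m(\epsilon, \delta) such that
\forall \epsilon, \delta \in (0,1),\ \forall \mathcal{D}:\quad
\Pr_{S \sim \mathcal{D}^{\,m(\epsilon,\delta)}}
\Bigl[\, \mathrm{err}_{\mathcal{D}}\bigl(A(S)\bigr)
  \;\le\; \min_{h \in H} \mathrm{err}_{\mathcal{D}}(h) + \epsilon \,\Bigr]
\;\ge\; 1 - \delta
```

Note the quantifier over all distributions D - that distribution-free flavor is exactly what separates the learnability question from the classical statistical estimation question, where you typically fix a parametric family of distributions and ask about properties of an estimator.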
In general I do a fair amount of math. I'm of the opinion that science should be a slow endeavor & you should really try to understand a problem deeply. One way to convince your peers that you have done this and have something to say is to produce a result - usually a theorem - backed by a rigorous argument - usually a proof. Unfortunately this isn't a universal opinion in CS, math, or most other fields.
I have done a couple of research internships in industry. Aside from a few top places, doing pure research there is hard: you need to produce a deliverable most of the time, and there is less pressure to publish than in academia. The work may be no less interesting, though.