Just had a thought…
Let’s say machine learning is really just a fancy steroid version of linear regression, with layers built on top of each other (ignore the details for this post, don’t try to be a “muh actually it’s more than that” faggot now!). Anyways.
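Before anyone screeches, here’s the handwave made concrete as a minimal numpy sketch (toy shapes and random weights I made up on the spot, not any real architecture): stacking weight matrices with nothing between them collapses back into one linear regression, and only the nonlinearity between layers makes it more than that.

```python
# Toy sketch of "linear regression stacked on linear regression".
# Shapes and weights are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))      # 100 samples, 3 features

W1 = rng.normal(size=(3, 5))       # "layer 1" weights
W2 = rng.normal(size=(5, 1))       # "layer 2" weights

# With nothing between the layers, the stack IS one linear regression:
stacked = X @ W1 @ W2
collapsed = X @ (W1 @ W2)          # a single weight matrix does the same job
assert np.allclose(stacked, collapsed)

# The "steroids" part is the nonlinearity between layers (e.g. a ReLU);
# after this, no single matrix W reproduces the map from X to output.
output = np.maximum(X @ W1, 0) @ W2
```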
Do you think future “research” is going to use machine learning in its papers just to look more “legit” in the media (“muh midwit, I love A.I., so smart, computer knows everything” meme), even tho a linear approximation would do the trick for really simple shit? I have started to wonder whether using “machine learning” gives you more freedom to tweak stuff to match your data (train it wrong, put “convenient” weights on nodes, pick and choose model functions, etc.) to make your papers look more “correct”, whereas if you had just checked the same thing with a simple formula, the incoherence would be obvious.
I have a background in physics, and my zoomer head thinks that simple approximations are good enough for basic stuff: if the setup is simple enough to follow at a glance, there’s less room for bullshit. A node network and a bunch of random vectors, on the other hand, leave A LOT of room to hide the bullshit in a model (toy demo below).
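Here’s that “room to hide the bullshit” point as a toy demo (all numbers invented, nothing from a real paper): data that genuinely is linear plus noise, fit once with a straight line and once with an over-flexible model, a degree-15 polynomial standing in for whatever high-capacity black box you like. The flexible fit looks “more correct” on its own training data and falls apart everywhere else.

```python
# Data that is genuinely linear plus noise, fit two ways.
# Toy numbers; the polynomial is a stand-in for any over-flexible model.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 16)
y = 2.0 * x + rng.normal(scale=0.1, size=x.size)  # true relationship: y = 2x

linear = np.polyfit(x, y, deg=1)     # 2 free parameters
flexible = np.polyfit(x, y, deg=15)  # 16 free parameters, one per data point
# (numpy may warn about the conditioning here; that is part of the point)

# On its own training data, the flexible model "wins":
print(np.mean((np.polyval(linear, x) - y) ** 2))    # small but nonzero
print(np.mean((np.polyval(flexible, x) - y) ** 2))  # essentially zero

# On fresh points from the same process, it blows up (Runge-style wiggles):
x_new = (x[:-1] + x[1:]) / 2         # midpoints between the training points
y_new = 2.0 * x_new
print(np.mean((np.polyval(linear, x_new) - y_new) ** 2))    # still small
print(np.mean((np.polyval(flexible, x_new) - y_new) ** 2))  # typically orders of magnitude larger
```

Sixteen knobs against sixteen data points is the “freedom to tweak” in miniature; the simple formula is the one that stays honest off the training set.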
Last thought: if you had any “machine learning” involved in your research, you can always go public and blame “it” for being wrong. You can use this shit as an excuse if it all backfires. (Social science and psychology have been running this excuse for a while now whenever a paper gets busted.)
TL;DR: Is machine learning being pushed in every field just to have a scapegoat if your papers are wrong, and to have more freedom to tweak your results and conclusions? Just adding more to the non-reproducible science shitshow.
> inb4 you hate machine learning and you are retarded.
No, I just think it’s being pushed into areas where simple shit already does the job well, and yes, I’m retarded.