>>13752541
>If you find a different trend in measurements it's fairly easy to figure out which data needs adjusted.
No, it isn't. Entire fields of study routinely make the same mistakes during experimentation and measurement, and these mistakes can go hundreds of years without detection, much less resolution. Occasionally, somebody does it correctly, but there are several barriers.
They may not even have done it fully consciously, but based on an expert's instinct and imagination (which get better results the stronger the expert). But even if they do it consciously, they may not be fully aware that it's substantially different from what their colleagues are doing, so they attribute the difference in their data to random noise, or perhaps to a mistake they didn't even make. And even if they realize they did something differently and raise hell about it, if it goes against the grain too much, it doesn't matter if they're right. They'll be ignored.
This happens in every field, even mathematics for fuck's sake. Climate science is no different in this regard. Where climate science IS different from a lot of fields is that true sciences don't retroactively "bend" data from across studies and sources they don't like into something they do. This is the kind of shit we see in soft "sciences" like psychology, sociology, and gender studies. Meta-studies and comparative data are nice and all for informational purposes, but to just go and adjust data without having BEEN on the ship to check whether that particular class of ship actually heats up the water, or how much it cools off when wind blows over it, and verified this experimentally, is dishonest. You can't just say "oh, they did it wrong" and "correct" the number to whatever you want without experimentally proving your new number is actually useful. Doing anything else is confirmation bias by another name.
And as long as your field runs on confirmation bias, you can't identify methodological issues, let alone eliminate them.