>>13771796
This is from a CS perspective, but I think my concerns about peer review generalize across disciplines:
1. What is peer review even "reviewing" if there is a replication crisis in science (yes, this is still pretty relevant to the hard sciences)? The criteria most reviewers apply are: everything is formatted correctly, the presentation is good, the conclusions follow from the results, and, most importantly, *the result is significant*. Obviously, reviewers aren't verifying the paper's claims themselves; they will likely just check (at a high level) that the paper's process "looks reasonable" and call it a day.
If you are proposing something revolutionary, there will be a lot more scrutiny, but if you are raising the performance of a specific ML model in a specific niche from 99.1% to 99.3% under certain conditions, you are technically advancing the state of the art, but the reviewers won't pay as much attention to the methodology as long as it looks reasonable.
This is my main problem with peer review: yes, it filters garbage, but the selection process for borderline papers essentially boils down to subjective concerns; and of course, things like reproducing others' work are definitely "low impact."
2. Corporate papers. Some are genuinely useful, but most are cheap excuses to make sure a conference still receives funding. They essentially boil down to "take our word for it". Yes, the lessons can be valuable since the authors are most likely not lying, but no, unless their methodology/code is open, they do not belong in a research conference.
3. There is definitely favoritism/bias in the peer review process. People claim that double-blind review solves this, but that is laughably naive. The community doing the highest-level research is very small, and it is usually not hard to figure out what someone is working on (enough to identify their paper) during review, even without names or other identifiers.