>>13172665
Models aren't bad; they're very good. But models should be made to follow the facts, and any deviation from those facts needs justification. There are many useful things I can think of, even from these studies that lack data. If they build a model correlating droplet sizes to infection rates, anticipating that infection rates will drop because masks block droplets, and then infection rates don't drop, or don't decline as substantially as predicted, that raises the question of why.
Of course, it is easy to discard a model. Forget about it, it's a mess, must be wrong. And it is wise to eliminate inconsistencies in the math or the thinking behind its construction. But what if the model isn't the problem? What if covid and other particles have ways of separating from droplets, or the viral load needed for infection is minimal?
Those two MIT guys built a model that implies a drastically different result than the data the CDC collected on mask effectiveness. They argued that masks should be more effective indoors in well-mixed environments, and they theorize that robust medical HVAC systems aren't strictly necessary: even modest ventilation in those rooms could go a long way toward reducing risk. The differences between their model and reality are obvious. They clearly labeled their conditions, which are idealized and nowhere near as dynamic as the conditions under which covid actually spreads.
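To make the well-mixed intuition concrete, here's a toy sketch of that kind of model. This is my own simplified version, not the MIT authors' actual code; the parameter values and function names are all illustrative assumptions. The point it shows is why ventilation matters so much under the well-mixed assumption: steady-state concentration scales as 1/ACH, so doubling the air changes per hour halves the inhaled dose.

```python
import math

# Toy well-mixed room model (illustrative only, not the MIT authors' code).
# Assumption: ventilation is the only removal mechanism (no filtration,
# settling, or viral decay), and the room is perfectly mixed.

def steady_state_concentration(emission_rate, room_volume, ach):
    """Steady-state airborne pathogen concentration (quanta per m^3).

    emission_rate: infectious quanta emitted per hour by an infected occupant
    room_volume:   room volume in cubic meters
    ach:           air changes per hour (ventilation rate)
    """
    # Balance: emission in = ventilation out  ->  C = S / (ACH * V)
    return emission_rate / (ach * room_volume)

def inhaled_dose(concentration, breathing_rate, hours):
    """Quanta inhaled by a susceptible occupant.

    breathing_rate: air inhaled per hour (m^3/h)
    """
    return concentration * breathing_rate * hours

def infection_probability(dose):
    """Wells-Riley style exponential dose-response: P = 1 - exp(-dose)."""
    return 1.0 - math.exp(-dose)

# Illustrative numbers: 10 quanta/h source, 100 m^3 room, 0.5 m^3/h breathing.
c_low_vent = steady_state_concentration(10, 100, ach=1)
c_high_vent = steady_state_concentration(10, 100, ach=2)

# Doubling ventilation halves the steady-state concentration,
# which is the core of the "any ventilation goes a long way" claim.
print(c_low_vent, c_high_vent)
print(infection_probability(inhaled_dose(c_high_vent, 0.5, hours=2)))
```

Under these assumptions the risk reduction from ventilation falls straight out of the steady-state balance, with no fluid dynamics needed, which is both the model's appeal and the reason it can't capture the messier, non-well-mixed conditions of real spread.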
Really, what bothers me is that their model is genuinely important for mask use in hospitals, but it just becomes this titular covidian circlejerk. There should be well-sourced, accessible data they can pull on this shit. Their model isn't even about covid, but covid is a convenient way to generate interest. So yes, they're at fault for sourcing mask data as if it were actual data, but that exposes a much larger fault in a system that can't see what's right in front of its nose.