>>13562831
>2. You train a model to, given a set of scans, output the correct diagnoses and treatments.
That's my point: humans should be the ones ratifying diagnoses and authorizing treatments, not the AI. You use it for information; you don't let it make decisions on its own.
>3. Because of human biases, all black patients in the sample are for example underdosed on pain medication, or not diagnosed and dismissed more often.
Again, pain medication dosing and patient dismissals should be at the discretion of a human doctor, not the AI.
Misdiagnoses would be bad, but from what I read, the author didn't say the AI misdiagnosed anything.
If the AI can accurately diagnose medical problems, that should only be beneficial to doctors, right?
>As a nonmedical example, Amazon had to scrap one of their resume scanners because it was starting to detect gender from college club affiliations and writing patterns and throwing female ones out.
Again, the problem appears to revolve around the action, not the detection.
I could be misunderstanding, but it seems like there's only a problem with AI when humans delegate the decision-making process to it.