Facial recognition systems are based on machine-learning computer vision algorithms, and they are being used to interrogate and detain people, even though the systems have only 2-3% accuracy, according to the government itself.

How algorithmic bias leads to unethical situations

We think of these algorithms as impartial, carefully designed systems, but that is misleading. Machine-learning systems are not the omniscient intelligences technologists often present them as; a better metaphor is that they are like babies that start out knowing very little and learn whatever their training data teaches them. A person may recognize a prejudice and hold it in check; an artificially intelligent system feels no such restraint, and because we do not understand how these systems work, the biases they absorb can look like rational, reasonable choices. The systems themselves are not unethical, even if they are biased; the situations they create are.
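To make that concrete, here is a minimal, hypothetical sketch of the kind of per-group audit that can surface learned bias hidden behind an aggregate accuracy number. Everything in it, the toy test set, the group labels, and the outcomes, is invented for illustration; real audits of face-recognition systems are far more involved.

from collections import defaultdict

# Hypothetical audit data: (group, model_said_match, truly_a_match).
# All values below are invented for illustration.
results = [
    ("group_a", True, True),  ("group_a", False, False),
    ("group_a", True, True),  ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, True),
    ("group_b", False, True), ("group_b", True, True),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, predicted, actual in results:
    total[group] += 1
    correct[group] += (predicted == actual)

# A single aggregate number can look tolerable; per-group accuracy
# reveals the disparity where the unethical situation arises.
overall = sum(correct.values()) / sum(total.values())
print(f"overall accuracy: {overall:.0%}")  # 75%
for group in sorted(total):
    print(f"{group} accuracy: {correct[group] / total[group]:.0%}")
    # group_a: 100%, group_b: 50%

The point is not these particular numbers but the pattern: one headline accuracy figure can mask very different error rates for different groups of people.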
Source: Huffington Post, January 20, 2020, 11:00 UTC