Would you outsource data analysis to a bird?

Pigeons can be quickly trained to spot cancerous masses in X-ray images. So can computer algorithms.

But despite the potential efficiency of outsourcing the task to birds or computers, that’s no excuse to get rid of human radiologists, says Ramón Alvarado, a philosopher and data ethicist at the University of Oregon.

Alvarado studies how humans interact with technology. He is particularly interested in the harms that can result from overreliance on algorithms and machine learning. As automation creeps further into people’s daily lives, there is a risk that computers will devalue human knowledge.

“They’re opaque, but we think that because they do math, they’re better than other knowers,” Alvarado says. “The assumption is that the model knows best, and who are you to tell the math it’s wrong?”

It’s no secret that human-built algorithms often perpetuate the biases of the people and data that shaped them. A facial recognition app trained primarily on white faces won’t be as accurate on a diverse set of people, and a resume-ranking tool that favors Ivy League graduates may overlook talented applicants with less conventional, harder-to-quantify backgrounds. But Alvarado is interested in a more nuanced question: What if everything goes well and an algorithm really is better than a human at a task? Even then, harm can still occur, Alvarado argues in a recent paper. He calls it “epistemic injustice.”

The term was coined by feminist philosopher Miranda Fricker in the 2000s. It has been used to describe benevolent sexism, such as men offering women help at the hardware store (a nice gesture) because they presume them less competent (a negative motivation). Alvarado extended Fricker’s framework and applied it to data science.

He points to the inscrutable nature of most modern technology: an algorithm can arrive at the right answer without our knowing how, which makes it difficult to question its results. Even the scientists who design today’s increasingly sophisticated machine learning algorithms usually can’t explain how they work or what features a tool uses to make a decision.

An oft-cited study found that a machine learning algorithm that correctly distinguished wolves from huskies in photos wasn’t looking at the canines themselves; it keyed on the presence or absence of snow in the background of the photo. And because a computer, or a pigeon, cannot explain its reasoning the way a human can, letting them take over devalues our own knowledge.
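The researchers in that study used a dedicated explanation technique, but the underlying idea can be illustrated with a much simpler occlusion test. The sketch below is not the study’s method; it is a minimal, hypothetical Python example (the `classify` function is a deliberately naive stand-in for a trained wolf-vs-husky model) showing how masking the background can reveal that a model is keying on snow rather than the animal.

```python
# Minimal occlusion-test sketch (illustrative only, not the study's method):
# if replacing the background changes the prediction, the model may be
# relying on background cues such as snow rather than the animal itself.
import numpy as np

def classify(image: np.ndarray) -> float:
    """Hypothetical stand-in for a trained wolf-vs-husky model.
    Returns the 'probability' that the image shows a wolf.
    It deliberately keys on overall brightness, a crude proxy for snow."""
    return float(image.mean() > 0.5)

def occlusion_check(image: np.ndarray, animal_mask: np.ndarray) -> dict:
    """Compare predictions on the full image vs. one with the background
    replaced by neutral gray. A large change suggests the model depends
    on the background, not the animal."""
    background_occluded = np.where(animal_mask, image, 0.5)
    return {
        "original": classify(image),
        "background_occluded": classify(background_occluded),
    }

# Toy example: a bright, snowy background with a dark animal in the center.
img = np.full((64, 64), 0.9)           # snow-like background
mask = np.zeros((64, 64), dtype=bool)
mask[24:40, 24:40] = True               # region containing the animal
img[mask] = 0.2                         # the dark animal itself

print(occlusion_check(img, mask))
# {'original': 1.0, 'background_occluded': 0.0} -> the prediction flips,
# so this toy "model" was classifying the snow, not the canine.
```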

Today, the same kinds of algorithms can be used to decide whether someone should receive an organ transplant, a line of credit or a mortgage.

The devaluation of knowledge that results from relying on such technology can have far-reaching negative consequences. Alvarado cites a high-stakes example: the case of Glenn Rodriguez, a prisoner who was denied parole based on an algorithm that quantified his risk upon release. Despite prison records showing a consistent pattern of rehabilitation, the algorithm ruled differently.

That produced multiple injustices, Alvarado argues. The first is the algorithm-based decision itself, which penalized a man who, by every other criterion, would have been granted parole. The second, more subtle injustice is the impenetrable nature of the algorithm itself.

“Opaque technologies harm the decision-makers themselves, as well as the subjects of decision-making processes, by lowering their status as knowers,” says Alvarado. “It’s an affront to your dignity, because what we know, and what others think we know, is an essential part of how we navigate, or are allowed to navigate, the world.”

Neither Rodriguez, nor his lawyers, nor even the parole board could access the variables that fed into the algorithm that determined his fate, in order to understand what might be biasing it and to challenge its decision. Their own knowledge of Rodriguez’s character was overshadowed by an opaque computer program, and their understanding of that program was blocked by the company that designed the tool. That lack of access is an epistemic injustice.

“In a world with increased automation of decision-making, the risks are not just of being harmed by an algorithm, but also of being left behind as creators and challengers of knowledge,” says Alvarado. “As we sit back and enjoy the convenience of these automated systems, we often overlook this key aspect of our human experience.”
