The fallacy of the omnipotent algorithm and other misconceptions we have about these systems


SEAN GLADWELL (Getty Images)

When Sun Tzu emphasized in The Art of War the need to know your enemy well, there were no computational algorithms to fear. Two millennia later, algorithms have been found ascribing greater chances of recidivism to detainees from minority groups, firing 150 people in a second, and even participating in armed conflict. And we don’t know them. A study by researchers at the University of Amsterdam, based on a sample of 2,106 people, found that more than half of those surveyed believe that algorithms are independent of human activity, have no bias, have the same level of critical reasoning and intelligence as humans, and will replace us. A significant 43% think that these systems can solve “all the problems of society”.


“We wanted to know if people have a fair idea of what algorithms are and what they do, because they come across them every day: on social networks, on their phones, when they watch TV…,” explains Brahim Zarouali, a researcher specializing in the study of communications and persuasive technologies. What they didn’t expect was to find such a level of ignorance. “It seemed really alarming to us,” he says. In addition, the phenomenon is more pronounced in certain demographic groups, with these misconceptions being more prevalent among the elderly, the less educated and women.

The research focuses on the algorithms that drive information consumption platforms, which personalize and tailor the information shown to each person, but the researcher does not rule out that the same confusions identified in this case extend to other applications of these systems.

What is the cost of these knowledge gaps, given how present these systems are in our lives? “Our argument is that they could widen the digital divides in our society. It is very important that we all have the same skills and knowledge to benefit from technology and algorithms,” argues Zarouali.

Eyes that do not see

Zarouali and his team locate the root of the problem in the intangible nature of these systems, which operate in the background, without anyone seeing their inner workings and, in many cases, as black boxes whose decisions cannot be explained. “This makes it difficult for the general population to get a correct idea of what algorithms can do and how they work,” summarizes the researcher.

What should we know about them? The study draws on a few basic ideas. If we follow Tarleton Gillespie’s definition, algorithms can be described as coded procedures that transform large amounts of input data into a desired result through specific calculations. A condensed and more pointed version of this description comes from Cathy O’Neil: “Algorithms are opinions embedded in mathematics.”
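O’Neil’s point can be made concrete with a toy sketch (the function, weights, and data below are entirely hypothetical, invented for illustration): a ranking algorithm performs exact calculations, but the weights it multiplies by are choices someone made, and those choices shape what each user ends up seeing.

```python
# A toy "relevance" score of the kind a feed might use to rank posts.
# The arithmetic is precise, but the weights are design decisions --
# opinions embedded in mathematics.
def relevance_score(clicks: int, watch_minutes: float, is_sponsored: bool) -> float:
    # These coefficients are arbitrary choices by the designer, not objective truths.
    score = 0.4 * clicks + 0.1 * watch_minutes
    if is_sponsored:
        # A commercial preference of the platform, encoded as a multiplier.
        score *= 1.5
    return score

posts = [
    {"id": "a", "clicks": 10, "watch_minutes": 3.0, "is_sponsored": False},
    {"id": "b", "clicks": 4, "watch_minutes": 2.0, "is_sponsored": True},
]

# The "desired result": posts ordered by the score the designer chose.
ranked = sorted(
    posts,
    key=lambda p: relevance_score(p["clicks"], p["watch_minutes"], p["is_sponsored"]),
    reverse=True,
)
print([p["id"] for p in ranked])
```

Changing a single weight reorders the output, which is exactly why the values and preferences of whoever writes the code matter.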

The researchers also consider it important to account for the context in which these systems operate: they have become essential parts of the competitive advantage of many technology companies. “This explains why many companies are reluctant to expose their algorithmic code to the outside world,” they point out. “Algorithms can reflect not only the biases of those who designed and operate them, but also the values and preferences of the companies that deploy them.” As for their ability to match our intelligence, replace us, or solve any problem, the reality is that their capabilities are, at least for now, limited to performing specific tasks very effectively.

Algorithmic literacy, explains Zarouali, is essential so that we can take an active role in scrutinizing these systems: resisting the decisions of those that cause us harm, and benefiting from the services of those we consider aligned with our interests. For all the alarming damage they can cause in their role as enemies, these tools can also help us predict strokes two years in advance, recover works of art thought to be lost, or minimize the risk of covid-19 contagion. “It is important to have critical digital citizenship in all walks of life,” says the researcher.

The researchers insist that the persistence of these misconceptions can show its effects in two ways: on the one hand, excessive wariness can cause us to reject these systems unjustifiably on the basis of a dystopian view of the future; on the other hand, disproportionate trust in them can contribute to the reinforcement of stereotypes and inequalities, and to the dissemination of manipulated content such as hyperrealistic fake videos (deepfakes). “The main solution is digital education. In these initiatives, it is important that people learn what algorithms are and are offered protection strategies to deal with their negative consequences. Likewise, they should be empowered so that they can also benefit.”

You can follow EL PAÍS TECNOLOGÍA on Facebook and Twitter or sign up here to receive our weekly newsletter.
