ALGORITHMIC BIAS describes systematic and repeatable errors in computer systems or software that create unfair outputs. Such bias can privilege one category over another in ways that deviate from the intended function of the algorithm, providing the user with results that reflect unintended or unanticipated flaws in data collection, coding, or algorithmic training. Across social-media platforms and search-engine results, algorithmic biases have broad implications, ranging from unintentional privacy violations to the reinforcement of discriminatory social biases based on race, gender, sexuality, class, ethnicity, ability, and age.
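The mechanism is easiest to see in miniature. The following Python sketch is a hypothetical simulation, with invented data and a deliberately crude one-threshold classifier rather than any real system; it shows how a training set that overrepresents one group yields a model whose errors fall disproportionately on the underrepresented group:

```python
import random

random.seed(0)

def sample(group, n):
    """Generate (feature, label, group) rows. The two groups' positive
    examples cluster around different feature values -- a stand-in for
    real-world differences that skewed data collection fails to capture."""
    rows = []
    for _ in range(n):
        label = random.random() < 0.5
        center = (1.0 if group == "A" else 0.6) if label else 0.0
        rows.append((random.gauss(center, 0.25), label, group))
    return rows

def error_rate(rows, threshold):
    """Fraction of rows the one-threshold classifier gets wrong."""
    return sum((x >= threshold) != y for x, y, _ in rows) / len(rows)

# Skewed data collection: group A is heavily overrepresented in training.
train = sample("A", 950) + sample("B", 50)

# "Training" here is just picking the threshold with the lowest overall
# training error -- which, given the skew, means tuning to group A.
threshold = min((t / 100 for t in range(-50, 151)),
                key=lambda t: error_rate(train, t))

# Evaluated on balanced data, the same model fails group B far more often.
test = sample("A", 1000) + sample("B", 1000)
for g in ("A", "B"):
    group_rows = [r for r in test if r[2] == g]
    print(f"group {g}: error rate {error_rate(group_rows, threshold):.1%}")
```

Because the threshold is chosen to minimize overall training error, and the training data are dominated by group A, the model is effectively tuned to group A: running the script shows a low error rate for group A and a far higher one for group B, even though no one wrote a discriminatory rule.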
As reliance on technological infrastructure to govern daily life grows, so too does technology’s capacity to shape users’ perception of reality. The widely accepted understanding of algorithms as neutral, unbiased research tools compounds the problem, as it inaccurately imputes to them an authoritative omniscience. This misplaced trust is partly a symptom of automation bias, whereby humans favor information provided by automated decision-making systems even when shown correct nonautomated evidence to the contrary.
Concerns about the role of algorithmic bias in influencing elections and in spreading misinformation and hate speech online are well founded and have been cited in numerous court cases. Problems are also widespread in criminal justice, where facial-recognition technology is often used to identify suspects but has proven significantly less accurate at recognizing darker-skinned faces, leading to wrongful arrests and convictions of Black men.
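Audits that surface such disparities typically compare error rates across demographic groups at a fixed decision threshold. The sketch below illustrates the idea with invented similarity scores (real audits, such as NIST’s face-recognition vendor tests, use large benchmark datasets); the quantity computed, the false match rate, is the error that can place an innocent person under suspicion:

```python
# Each record: (matcher's similarity score, same person?, skin-tone group).
# Scores are invented for illustration only.
comparisons = [
    (0.91, True, "lighter"), (0.32, False, "lighter"),
    (0.88, True, "lighter"), (0.41, False, "lighter"),
    (0.79, True, "lighter"), (0.48, False, "lighter"),
    (0.84, True, "darker"),  (0.78, False, "darker"),
    (0.69, True, "darker"),  (0.81, False, "darker"),
    (0.72, True, "darker"),  (0.55, False, "darker"),
]

THRESHOLD = 0.75  # scores at or above this count as a declared "match"

def false_match_rate(rows):
    """Share of different-person pairs wrongly declared a match."""
    impostor_scores = [s for s, same_person, _ in rows if not same_person]
    return sum(s >= THRESHOLD for s in impostor_scores) / len(impostor_scores)

for group in ("lighter", "darker"):
    group_rows = [r for r in comparisons if r[2] == group]
    print(f"{group}-skinned pairs: false match rate "
          f"{false_match_rate(group_rows):.0%}")
```

A matcher with this profile can look accurate in aggregate while concentrating its most consequential errors on one group, which is why per-group evaluation, rather than a single overall accuracy figure, is the standard auditing approach.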
These themes are explored through an intersectional Black-feminist lens in Safiya Noble’s Algorithms of Oppression: How Search Engines Reinforce Racism, a comprehensive review of Google search algorithms and the discriminatory biases embedded within them. Noble argues that Google and other prominent search engines, libraries, and information sources produce results that endorse Whiteness, heteronormativity, and patriarchy as the standard and treat alternatives as problematic. Such biases are built into the algorithms because the people who create them carry those same biases. She cites as examples Google’s autocomplete feature, which suggested porn websites when “black girls” was typed in and privileged antisemitic suggestions in response to the word “Jew,” as well as the Library of Congress’s use of the term “illegal aliens” rather than “noncitizens” or “unauthorized immigrants.” Algorithms of Oppression challenges the utopian idea that the Internet can eventually become a fully democratic or postracial environment. Noble finds these algorithms especially insidious because they operate in the guise of “neutral” infrastructure while in fact reflecting, protecting, and perpetuating racism, patriarchy, homophobia, classism, and ableism.