Hidden Stereotyping—Does Google Racially Discriminate?

Sometimes prejudice is obvious. In the past, segregation, poll taxes, and the entire constellation of Jim Crow tactics were active, overt impediments to the progress and well-being of black Americans.

Laws and customs have changed to better reflect inclusive values, but a truly equal-opportunity society is still a distant dream. Many forms of discrimination remain legal—in most states, for example, it is not against the law to fire someone for being gay or transgender. More insidiously, prejudice can wreak havoc in the subject’s own mind without the intervention of another person.

This effect has been documented in the form of “stereotype threat,” in which reminders of purported difference, even subconscious ones, can actually create that difference; in the absence of such reminders, performance is the same. The New York Times summarized one such study:

In a 1995 article in The Journal of Personality and Social Psychology, Professors Steele and Aronson found that black students performed comparably with white students when told that the test they were taking was “a laboratory problem-solving task.” Black students scored much lower, however, when they were instructed that the test was meant to measure their intellectual ability. In effect, the prospect of social evaluation suppressed these students’ intelligence.

The same article explains how stereotypes can negatively affect even those whose stereotypes are “positive”:

[I]n a study published earlier this year in the journal Learning and Individual Differences, high school students did worse on a test of spatial skills when told that males are better at solving spatial problems because of genetic differences between males and females. The girls were anxious about confirming assumptions about their gender, while the boys were anxious about living up to them.

Earlier this month, a new Harvard study illuminated another hidden source of stereotype perpetuation: automatically generated online search advertising. The BBC reported that Googling names associated with black people leads to measurably more crime-related content in the ads accompanying the search results. According to the Guardian, the “research suggests that it’s 25% more likely you’ll get ads for criminal record searches from ‘black-identifying’ names than white-sounding ones.”

Forbes has a disappointing take on the study. Gizmodo asks, “[I]s that Google being racist? Or the people (that would be us) using Google who are racist?” After all, Google’s algorithms respond to user activity—they “adapt ad placement based on mass-user habits,” in the Guardian’s explanation. But it would be too cynical by half to simply throw up our hands and accept racist ads as “only” a reflection of racist sentiment.

The Guardian draws an analogy between internet architecture and language—both reflect real-world prejudice, but both can be changed for the better:

For centuries people have been attempting to rid language of its “structural racism” by inventing politically neutral dialects. Esperanto, created by the rather wonderfully named LL Zamenhof, has been the most successful of these efforts, designed to transcend nationality and foster peace, love, harmony, all that good stuff. It hasn’t quite got there yet but it has managed to spawn tens of thousands of fluent speakers, as well as around a thousand native speakers. It could be said that the technological equivalent of Esperanto is Value Sensitive Design (VSD), a belief that technology should be proactively influenced to take account of human values in the design process, rather than simply reacting to them afterwards.

The quoted selection seems to regard the idealism behind Esperanto as naive—that is, again, too cynical. Idealism is a great asset in the fight against discrimination.

Do you have questions about workplace discrimination? Contact The Harman Firm today.
