Digital Redlining of Abortion Access and Women on Web

Women and pregnant persons need abortion access during the pandemic, yet digital redlining and the algorithmic oppression of abortion services such as Women on Web continue to restrict access to safe abortion.

Blog by Erin Hassard*

 

After the most recent Google Core Update, Women on Web (WoW), an organization that annually provides over 60,000 women worldwide with the means to safely self-induce an abortion they would otherwise not have access to, saw a massive decline in its website traffic. Given our global pandemic predicament, this was both puzzling and alarming: this is a time when even women and pregnant persons with legal access would be limited in their options. In other words, it is a time when women and pregnant persons need Women on Web’s service the most. The drop has led the organization to take a deeper look at the measures and guidelines that determine how algorithmic updates such as these are assessed.

Aside from generic webmaster guidelines and recommendations, Google provides little insight into why the update produced the results it did. The work has thus been put on non-profit workers’ shoulders to make sense of such a dramatic loss for a service that has been operating for over 15 years. Women on Web has since taken the recommended steps to optimize the performance of its website, reviewing its search engine optimization (SEO) strategies and screening pages for violations of Expertise, Authoritativeness, and Trustworthiness (E-A-T), a critical ranking factor for Google.
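For readers curious about what such a review looks like in practice, the sketch below shows the kind of basic automated check a site maintainer might run: fetching a page and flagging missing titles, meta descriptions, headings, and image alt text. It is a minimal illustration only; the URL and the specific signals checked are my own assumptions, not Women on Web’s actual audit process, and real SEO and E-A-T reviews go far beyond checks like these.

```python
import requests
from bs4 import BeautifulSoup

# Hypothetical page list -- illustrative only, not WoW's actual audit targets.
PAGES = ["https://www.womenonweb.org/en/"]

def audit_page(url):
    """Flag a few basic on-page SEO problems for one URL."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    issues = []
    if soup.title is None or not soup.title.get_text(strip=True):
        issues.append("missing or empty <title>")
    if soup.find("meta", attrs={"name": "description"}) is None:
        issues.append("missing meta description")
    if not soup.find_all("h1"):
        issues.append("no <h1> heading")
    # Images without alt text hurt accessibility and search ranking alike.
    missing_alt = sum(1 for img in soup.find_all("img") if not img.get("alt"))
    if missing_alt:
        issues.append(f"{missing_alt} image(s) without alt text")
    return issues

for page in PAGES:
    print(page, "->", audit_page(page) or "no basic issues found")
```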

And while the response to the site’s amendments has been positive thus far, it will be months, at best, before the damage can hopefully be undone. In the meantime, women and pregnant persons, at a particularly desperate time, may not get access to the service. The update has also cost time and labour from medical staff and non-profit workers whose resources are already stretched thin. One would think that the backtracking services such as WoW are now responsible for is surely something a powerhouse like Google would take into consideration and prevent from happening, perhaps even shouldering some of the responsibility. This, however, is obviously not the case.

Are search algorithms really neutral? Or are they, rather, biased?

 

How does this happen in the first place? What are the factors that go into search algorithms that yield such disastrous outcomes for such vital services? Are they as neutral and well-intentioned as the straightforward digital maxim, E-A-T, would suggest? Or is their influence more insidious?

Beginning in the 1930s, an exercise in racial discrimination became common practice within the largest financial institutions of the United States. Spearheaded by the Home Owners’ Loan Corporation (HOLC), the country’s biggest financiers of individual and family housing loans began literally mapping out marginalized and Black communities in red and refusing them credit on discriminatory grounds. The aftermath was the creation of federal risk-evaluation standards that led to decades of systemic oppression and segregation. So was born the term “redlining” to symbolize the active blackballing of racialized and marginalized communities by those in power. As a side note, after some consideration, I’m using the term “blackballing” – originally referring to the black balls used to denote a negative ballot – purposefully here; it is a symbol of how unconsciously embedded the codes of othering and adversity are, even in our own language, let alone an artificial one.

The term “redlining” has since evolved to include this practice at the hands of any institution that discriminates against certain groups or refuses them resources or support, many of which still tend to be disadvantaged communities, including Black, Indigenous, and people of colour (BIPOC). This is being recognized more readily in the technology world, where massive amounts of data get processed wholesale and thus yield discriminatory outcomes. What’s becoming more obvious is that the way “big data” gets evaluated is ultimately filtered through the same biased lens that places certain groups ahead of others, just as redlining did; a practice now referred to as “digital redlining”. This includes search engine testing, censorship, and the folding of simplistic or stereotypical correlations into the creation of algorithms.

Digital redlining means that decisions about how information on groups and services gets disseminated, and who can access it, hinge on a small, powerful group of gatekeepers. Despite Google’s assertion that algorithmic updates are based on “automated systems” and that human raters do not screen individual content, human-made code comes with human-made bias. It came as no surprise, then, that the gender and ethnicity breakdown of gatekeeper positions (meaning specifically technical positions) at Google as of 2017 was 83% men and 60% Caucasian, with the next largest group, at 34%, being Asian. Only 1% of these employees were Black, 2% were Latino, and 3% were bi- or multi-racial. Numbers like these are often defended as meritocracy: the distribution is said to reflect the pools of graduates available in computer science and engineering programs. That defence is finally being debunked as systemic racism and sexism are made clearer to the mainstream. The recent outcry for systemic change by BIPOC groups in North America has exposed how deeply rooted systems of oppression are, and how present they remain in 21st-century institutions, the tech world included. These systems pose a serious problem for communities and services that deal with vulnerable groups, such as Women on Web, and that are beholden to the social understandings of the most dominant and privileged group.

 

Digital Redlining and Abortion Access

 

So, how do abortion and services such as Women on Web fit into the parameters of digital redlining? The relationship between digital redlining and abortion restriction – and the ongoing controversy that persists even where there is no restriction – lies in continued systemic sexism and society’s sense of entitlement to women’s bodies. Sexism and misogyny, like racism, are intersectional and feed into people’s lives in myriad ways. It is at this intersection that the oppression operates: privileged Caucasian women are more likely to be able to access a safe abortion if they need one, regardless of where they are. While Google has demonstrated in the past that it is willing to confront issues of discrimination within its technical sector, it remains clear that not enough is being done to address how crucial social awareness is on the part of those responsible for how we globally seek out information.

An example of this is James Damore, a Google engineer. In 2017, Damore was fired after circulating an internal memo laying out his thoughts on the corrosiveness of what he believed to be Google’s “left-leaning, politically correct” culture. In the memo, Damore claimed he was aware of existing biases in tech programming and “strongly believed in racial and gender diversity”. At heart, however, Damore exposed himself as a proponent of sexist and racist ideologies: he justified the gender imbalance within the tech sphere as simply due to women’s biological interest in “people rather than things”, whereas men were generally the opposite. He quoted research and theories that attribute the wage gap to biology, gave no indication of having considered non-binary genders, and offered no reasons for pursuing ethnic diversity, arguing that general “personality differences” were just cause not to actively seek out more diverse candidates.

Certainly, in fairness to his colleagues, who should not all be categorized as being as socially uninformed as Damore, the memo was one man’s misguided opinion, and he ultimately suffered consequences for it. But what’s undeniable is that he felt he was on solid ground expressing it to his colleagues. He displayed no insight whatsoever while asserting that “women are more interested in people than in things, relative to men”, a statement which lays bare an intrinsic problem: in order to appropriately and objectively facilitate the “things” we all use, you need to have empathy for the divisions people experience so as not to cause more. Gatekeepers need to be aware of the social paradigms affected by the “things” they manufacture and who has access to them. By demonstrating a complete disregard for the societal nuances affected by his work, Damore let slip that he – and by extension, Google – did not feel obliged to consider what he regarded as “women’s work” while coding systems responsible for so-called neutral accessibility.

 

Algorithmic Oppression and Tutelage on Abortion Access

 

In Algorithms of Oppression: How Search Engines Reinforce Racism, Safiya Umoja Noble coined the term “algorithmic oppression” to refer to these specific failures on the part of the global tech giants. When those in power continually fail to consider the bearing they have on other lives, that weight continues to impinge on those lives. For example, Noble describes typing in the search words “Black girl” to see what would be associated with her own identity. She was then bombarded with pornography; some insight into whose interests are taken into account when creating the algorithm (hint: not Noble’s). In an attempt to recreate the experiment, I plugged the keyword “abortion” into Google’s search engine. The first few websites to surface were dictionary definitions, then medical definitions; fairly standard for a single-word search. I then scrolled to the bottom of the page to see the top suggestions for combination keywords and selected “causes of abortion”, as that seemed at once the most logical and the most mystifying. Lo and behold, the first hits yielded ads for anti-choice organizations. “Does abortion kill a person or just extra tissue?” was the first question to pop up under the first hit for someone searching the “causes of abortion”. Upon plugging in “online abortion”, keywords directly associated with Women on Web, the first hits were ads for sites that seemed relevant but in fact cited false and fear-inducing information about abortion to dissuade women and pregnant persons. While I’m sure the websites all scored high by SEO and E-A-T standards, the information they were peddling was false. Google reports not interfering with or “screening” web content, but perhaps this fly in their neutral ointment could persuade them to be a bit more responsible.
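For anyone who wants to repeat the keyword-suggestion part of this experiment programmatically, the sketch below queries Google’s autocomplete service. A caveat: this endpoint is unofficial and undocumented – it is widely used in practice but could change or disappear at any time – and its suggestions approximate, rather than exactly match, the related searches shown at the bottom of a results page.

```python
import requests

# Unofficial, undocumented Google autocomplete endpoint (an assumption:
# widely used in practice, but it may change or vanish without notice).
SUGGEST_URL = "https://suggestqueries.google.com/complete/search"

def get_suggestions(query):
    """Fetch Google's autocomplete suggestions for a search term."""
    resp = requests.get(
        SUGGEST_URL,
        params={"client": "firefox", "q": query},
        timeout=10,
    )
    resp.raise_for_status()
    # The response has the shape: [query, [suggestion, suggestion, ...]]
    return resp.json()[1]

for term in ["abortion", "causes of abortion", "online abortion"]:
    print(term, "->", get_suggestions(term))
```

Suggestions vary by region, language, and time, so results will differ from the ones described above; that variability is itself part of what makes these systems so hard to audit from the outside.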

Now, this is not meant to spark up that debate, nor is it meant to downplay its validity. But when this is the first thing to come up as information, even as an ad, it not only reinforces the stigma against women who need an abortion – yes, I’m using the term need – but literally places the medical information that follows below it, ranked as less relevant. This is where the element of digital redlining lies: making the beliefs of some more important than the medical needs of others. The power corporate tech companies hold is the ability to decide which sets of data get disseminated, how, and to whom. Women on Web not only assists women in need with abortion access and contraceptive resources but also provides information that advocates for abortion as integral to women’s health. The site offers information on pregnancy, miscarriage, and sexual health and education, and debunks the numerous myths surrounding abortion. In 2019 alone, Women on Web engaged with nearly 200,000 women and pregnant persons from over 200 countries in over 24 languages, offering consultations, all kinds of information about the abortion pill (mifepristone and misoprostol), advice on dealing with pre-existing conditions, and guidance on handling the authorities if they needed to seek medical attention, to name a few. Two of the countries with WoW’s highest numbers of service users are Brazil and South Korea, both of which have been severely struck at different times by the COVID-19 pandemic. Not having a conscientious approach to algorithmic data means unforeseeable disasters such as the pandemic are made far worse for vulnerable groups who are now reliant on internet access.

What finally needs to happen is for gatekeepers to hold themselves and each other accountable. It’s no longer enough to make examples of workers such as James Damore by firing them and then issuing a statement; tech companies such as Google need to actively seek to diversify their workforce and train their staff to be conscious of how impactful their potential ignorance of systemic forces really is. Otherwise, we get instances of digital redlining and algorithmic oppression, practices that, as we’ve seen in the past, could burden generations upon generations of those impacted with additional struggle. If technological gatekeepers are going to insist on their impartial objectivity, training on intersectional feminism, anti-racism, and diversity might be a good place to start…

     

Erin Hassard:

Erin is a linguistics graduate of Concordia University and lives in Montreal. She’s a freelance writer/editor and social justice advocate who has done community work on language discrimination and gender-based violence.