Tag Archives: AI bias

Misogynistic tendencies in AI

Reading Time: 3 minutes

In recent years, artificial intelligence has come under fire for its role in perpetuating and amplifying misogyny. This is largely because AI systems are often designed and trained by predominantly male teams, who can inadvertently build their own biases into the algorithms, and because the systems learn from data that reflects existing prejudice. As a result, AI systems have been found to display sexist behaviour, such as labelling women ‘cooks’ and ‘nurses’ while referring to men as ‘doctors’ and ‘engineers’.

Adele and Betty Friedan as imagined by the Lensa AI.

Sexist language
There are a number of ways in which AI can be misogynistic. One of the most visible is the use of sexist language. This was most famously demonstrated by Microsoft’s chatbot Tay, which was designed to learn from its interactions with users on Twitter. However, within 24 hours of launch, Tay began tweeting sexist and racist remarks it had picked up from other users on the platform.

While this was an extreme example, it highlights the fact that AI systems can easily pick up and amplify the biases that exist in the real world. If left unchecked, this can lead to a reinforcement of sexist attitudes and behaviours.


Algorithmic bias
Another way in which AI perpetuates misogyny is through the use of algorithms. These are the sets of rules that determine how a computer system behaves. Often, these algorithms are designed by humans, who may inadvertently introduce their own biases.
For example, a study by researchers at the MIT Media Lab found that commercial facial analysis systems misclassify women, and particularly darker-skinned women, far more often than they misclassify men. The systems had been trained and benchmarked on datasets that were overwhelmingly male and light-skinned, so they had far less data from which to learn what other faces look like.
This kind of algorithmic bias can have a severe impact on the real world, as it can lead to women being denied access to certain services or being treated differently by law enforcement.
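In practice, this kind of bias is exposed through disaggregated evaluation: measuring a model’s error rate separately for each demographic group instead of reporting one overall accuracy. A minimal Python sketch of the idea (the toy labels, predictions and group names below are illustrative assumptions, not data from the MIT study):

```python
import numpy as np

def error_rate_by_group(y_true, y_pred, groups):
    """Return the misclassification rate for each demographic group."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {
        str(g): float(np.mean(y_true[groups == g] != y_pred[groups == g]))
        for g in np.unique(groups)
    }

# Toy gender-classification audit: 1 = labelled "male", 0 = labelled "female".
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 1, 1, 0, 1, 0]   # the errors fall entirely on one group
groups = ["men"] * 4 + ["women"] * 4

print(error_rate_by_group(y_true, y_pred, groups))
# {'men': 0.0, 'women': 0.5}: a gap this large is the signature of algorithmic bias.
```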

Data bias
Another issue with AI is that it often relies on data that is biased. This can be because the data is collected in a biased way, or because it reflects the biases that already exist in the real world.
For example, in a widely discussed incident analysed by Ellen Broad, an expert in data sharing, infrastructure and ethics, Google Photos’ image recognition system labelled photos of Black people as ‘gorillas’. The system had been trained on a dataset in which white faces were heavily over-represented, so it performed far worse on Black faces. This kind of data bias can lead to AI systems making inaccurate and potentially harmful decisions: if a facial recognition system is more likely to misidentify Black people as criminal suspects, innocent people can end up wrongly arrested.
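One practical way to catch data bias before a model is ever trained is to audit how well each group is represented in the dataset. A minimal sketch, assuming the images come with (hypothetical) demographic annotations:

```python
from collections import Counter

def representation_report(records, attribute):
    """Print how often each value of a demographic attribute appears in the data."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    for value, count in counts.most_common():
        print(f"{attribute}={value}: {count} examples ({count / total:.1%})")

# Hypothetical metadata; a real audit would read this from the dataset itself.
records = [
    {"image_id": 1, "skin_tone": "lighter"},
    {"image_id": 2, "skin_tone": "lighter"},
    {"image_id": 3, "skin_tone": "lighter"},
    {"image_id": 4, "skin_tone": "darker"},
]

representation_report(records, "skin_tone")
# A split this skewed is a warning that the trained model will likely perform
# worse on the under-represented group.
```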

Brandee Barker’s Twitter post

Moreover, there’s something deeply troubling about the way AI is being used to create ‘portraits’ of people, particularly women. In the case of Brandee Barker, the Lensa app returned deeply sexualized images of her.
This isn’t just a case of bad taste or something that can be chalked up to the ‘uncanny valley’ effect. There’s a more sinister element at play here: the objectification and sexualization of women by AI.
It’s not just Barker who has been rendered in a sexualized manner by AI. In an essay for Wired, the writer Olivia Snow wrote that she submitted “a mix of childhood photos and [current] selfies” to Lensa AI and received back “fully nude photos of an adolescent and sometimes childlike face but a distinctly adult body”.
The AI-generated images of women are eerily realistic, and that’s what makes them so troubling. They look like real women, but they’ve been created by machines with the sole purpose of objectifying and sexualizing them. This is a scary prospect, because it means that AI is perpetuating and amplifying the misogyny that already exists in our society.

Addressing the issue
Given the potential impacts of AI-perpetuated misogyny, it is important that the issue is addressed. The solution to this problem is not to try to create AI that is gender-neutral. Instead, we need to ensure that AI systems are designed and built with the needs of all users in mind. This includes ensuring that a diverse range of voices are involved in the development process and that training data is representative of the real world. Only by taking these steps can we create AI systems that are truly inclusive and beneficial for everyone.

Sources:

Lensa AI app

https://www.theguardian.com/us-news/2022/dec/09/lensa-ai-portraits-misogyny

https://www.adweek.com/performance-marketing/microsofts-chatbot-tay-just-went-racist-misogynistic-anti-semitic-tirade-170400/

https://news.mit.edu/2018/study-finds-gender-skin-type-bias-artificial-intelligence-systems-0212


A.I. Bias: Is Google doing more harm than good?

Reading Time: 4 minutes

How is Google tackling the negative impact of algorithmic bias? Considering the company’s recent upheavals, it seems as though Google is more interested in concealing AI bias and ethical concerns than in addressing them.

What Google's Firing of Researcher Timnit Gebru Means for AI Ethics

Timnit Gebru, a well-respected leader in AI bias and ethics research, unexpectedly left Google earlier this month. Gebru says she was fired via email over a research paper that Google claimed “didn’t meet the bar for publication”; Google, for its part, maintains that she resigned voluntarily. More than 5,300 people, including over 2,200 Google employees, have now signed an open letter protesting Google’s treatment of Gebru and demanding that the company explain itself.

The research paper Gebru co-authored criticized large language models, the kind used in Google’s sprawling search engine, arguing that such models could hurt marginalized communities. The conflict over the publication of this paper is what triggered Gebru’s departure.

Gebru and her co-authors explain in the paper that there is a lot wrong with large language models. Above all, because they are trained on huge bodies of existing text, the systems tend to absorb a great deal of existing human bias, predominantly about race and gender. The paper also notes that these models ingest so much data that they are extremely difficult to audit and test, so some of this bias is likely to go undetected.
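One simple way to see what the authors mean is to probe a publicly released language model and watch how it fills in gendered blanks. A rough sketch using the open-source Hugging Face transformers library and the public bert-base-uncased checkpoint (the probe sentences are my own illustrative choices, and this says nothing about how Google audits its production systems):

```python
# pip install transformers torch
from transformers import pipeline

# Masked-language-model probe on the publicly available BERT base model.
fill = pipeline("fill-mask", model="bert-base-uncased")

for sentence in [
    "The doctor said that [MASK] would be late for the appointment.",
    "The nurse said that [MASK] would be late for the appointment.",
]:
    top = fill(sentence)[:3]  # three most likely words for the blank
    guesses = ", ".join(f"{p['token_str']} ({p['score']:.2f})" for p in top)
    print(f"{sentence}\n  -> {guesses}\n")

# If the model keeps filling the "doctor" sentence with "he" and the "nurse"
# sentence with "she", it has absorbed exactly the kind of gendered
# association the paper warns about.
```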

The paper additionally highlighted the adverse environmental impact: training and running such huge language models on electricity-hungry servers leaves a significant carbon footprint. It noted that training BERT, Google’s own language model, produced approximately 1,438 pounds of carbon dioxide, around the same as a round-trip flight between New York and San Francisco.
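For a sense of where a figure like that comes from, the arithmetic is just energy consumed multiplied by the carbon intensity of the electricity. A back-of-the-envelope sketch (the roughly 1,500 kWh energy figure and the 0.954 lbs of CO2 per kWh US-grid factor are assumptions taken from the widely cited Strubell et al. estimate, not numbers reported by Google):

```python
# Back-of-the-envelope CO2 estimate for a single BERT training run.
energy_kwh = 1507          # assumed energy consumed (Strubell et al. estimate)
lbs_co2_per_kwh = 0.954    # assumed average US-grid carbon intensity

co2_lbs = energy_kwh * lbs_co2_per_kwh
print(f"~{co2_lbs:.0f} lbs of CO2")   # ~1438 lbs, the figure quoted above
```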

Moreover, the authors argue that efforts to build systems that might actually “understand” language and learn more efficiently, the way humans do, are crowded out by the resources poured into building ever-larger language models.

The reason Google might have been especially upset with Gebru and her co-authors for scrutinizing the ethics of large language models is that Google has a considerable amount of resources invested in exactly this technology.

Google has its own large language model, called BERT, which it uses to help power search results in several languages, including English. BERT is also used by other companies to build their own language-processing software.

BERT is optimized to run on Google’s own specialized A.I. processors, which are accessible exclusively to clients of its cloud computing service. Training and running a large language model of your own requires a great deal of cloud computing time, so companies are more inclined to simply use Google’s BERT. Search, which BERT helps power, is a key part of Google’s business, generating about $26.3 billion in revenue. According to Kjell Carlsson, a technology analyst, the market for such large language models is “poised to explode”.

This market opportunity is exactly what Gebru and her co-authors were criticizing: in their view, Google is putting profit maximization ahead of ethical and humanitarian concerns.

Google has been called out for harmful bias in artificial intelligence before. In 2016, the company was heavily criticized for racial bias when users noticed that searching “three white teenagers” returned stock photos of cheerful Caucasian adolescents, while searching “three black teenagers” returned an array of mug shots. The same search with “Asian” substituted for “white” returned links to pornography. Google also came under fire in July 2015 when its Photos app automatically labelled a photo of two Black friends as gorillas. These are only a few instances of many. And it is not just the results themselves; autocomplete predictions can be just as misleading and harmful. Such bias must be curtailed, as it reinforces untrue negative stereotypes and harms POC communities.

In the end, it is unfortunate that Google, along with other giant tech corporations, still struggles to eliminate negative bias in artificial intelligence. At a Google conference in 2017, the company’s then head of artificial intelligence said we don’t need to worry about killer robots; instead, we need to worry about bias.

The current head of Google AI, Jeff Dean, said in 2017, “when an algorithm is fed a large collection of text, it will teach itself to recognize words which are commonly put together. You might learn, for example, an unfortunate connotation, which is that doctor is more associated with the word ‘he’ than ‘she’, and nurse is more associated with the word ‘she’ than ‘he’. But you’d also learn that surgeon is associated with scalpel and that carpenter is associated with hammer. So a lot of the strength of these algorithms is that they can learn these kinds of patterns and correlations”.
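The pattern Dean describes is easy to reproduce with publicly available word embeddings. A small sketch using the open-source gensim library and its downloadable GloVe vectors (the choice of model and word pairs is mine, so treat this as an illustration of the general phenomenon rather than a picture of Google’s own systems):

```python
# pip install gensim
import gensim.downloader as api

# Download a small set of publicly available pretrained word vectors (~130 MB).
vectors = api.load("glove-wiki-gigaword-100")

pairs = [
    ("doctor", "he"), ("doctor", "she"),
    ("nurse", "he"), ("nurse", "she"),
    ("surgeon", "scalpel"), ("carpenter", "hammer"),
]

for w1, w2 in pairs:
    # Cosine similarity: higher means the words occur in more similar contexts.
    print(f"{w1:>9} ~ {w2:<8} {vectors.similarity(w1, w2):.3f}")

# Typically "doctor" lands closer to "he" and "nurse" closer to "she":
# the learned correlations Dean describes, the useful and the unfortunate alike.
```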

The task, says Jeff Dean, is to work out which biases you want an algorithm to pick up on, and it is the science behind this that his team, and many in the AI field, are trying to navigate.

“It’s a bit hard to say that we’re going to come up with a perfect version of unbiased algorithms.”

https://www.bbc.com/news/business-46999443

References:

https://docs.google.com/document/d/1f2kYWDXwhzYnq8ebVtuk9CqQqz7ScqxhSIxeYGrWjK0/edit

https://googlewalkout.medium.com/standing-with-dr-timnit-gebru-isupporttimnit-believeblackwomen-6dadc300d382

https://theconversation.com/upheaval-at-google-signals-pushback-against-biased-algorithms-and-unaccountable-ai-151768

https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/
