Misogynistic tendencies in AI

In recent years, artificial intelligence has come under fire for its role in perpetuating and amplifying misogyny. This is largely because AI systems are built by development teams that skew heavily male and are trained on data that reflects existing social biases, which end up inadvertently embedded in the algorithms. As a result, AI systems have been found to display sexist behaviour, such as associating women with ‘cooks’ and ‘nurses’ while associating men with ‘doctors’ and ‘engineers’.

Adele and Betty Friedan as imagined by the Lensa AI.

Sexist language
There are a number of ways in which AI can be misogynistic. One of the most visible is through sexist language. This was most famously demonstrated by Microsoft’s chatbot Tay, which was designed to learn from its interactions with users on Twitter. Within 24 hours of launch, Tay began tweeting sexist and racist remarks it had picked up from other users on the platform.

While this was an extreme example, it highlights the fact that AI systems can easily pick up and amplify the biases that exist in the real world. If left unchecked, this can lead to a reinforcement of sexist attitudes and behaviours.
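
One way to observe this pick-up of real-world bias directly is to probe a word embedding trained on ordinary web text. Below is a minimal sketch, assuming gensim and its downloadable pretrained ‘word2vec-google-news-300’ model; the specific words probed are illustrative, but the stereotyped doctor/nurse associations mentioned above are exactly what such probes tend to surface.

```python
# Minimal sketch: probing a pretrained word embedding for gender-stereotyped
# occupation associations. Assumes gensim and its downloadable
# "word2vec-google-news-300" model (a large one-off download).
import gensim.downloader as api

model = api.load("word2vec-google-news-300")

# The classic analogy probe: "man is to doctor as woman is to ...?"
# computed as vector arithmetic (doctor - man + woman).
print(model.most_similar(positive=["woman", "doctor"], negative=["man"], topn=3))

# Compare how strongly each occupation leans towards "she" versus "he".
for job in ["nurse", "cook", "doctor", "engineer"]:
    gap = model.similarity(job, "she") - model.similarity(job, "he")
    print(f"{job:10s} she-vs-he similarity gap: {gap:+.3f}")
```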


Algorithmic bias
Another way in which AI perpetuates misogyny is through biased algorithm design. An algorithm is the set of rules that determines how a computer system behaves, and because those rules are written by humans, their designers may inadvertently introduce their own biases.
For example, the MIT Media Lab’s ‘Gender Shades’ study found that commercial facial-analysis systems misclassified the gender of women far more often than that of men, with error rates of up to 34.7% for darker-skinned women compared with less than 1% for lighter-skinned men. The systems had been trained and benchmarked on datasets that were predominantly male and lighter-skinned, so they simply performed worse on everyone else.
This kind of algorithmic bias can have severe real-world consequences, as it can lead to women being denied access to certain services or being treated differently by law enforcement.
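
The methodological core of such audits is simple: don’t stop at overall accuracy, break the error rate down by group. A minimal sketch, assuming pandas and using made-up predictions rather than any real system’s output:

```python
# Minimal sketch of a disaggregated accuracy audit, the kind of analysis
# behind the Gender Shades findings. Assumes pandas; the predictions are
# illustrative, not taken from any real system.
import pandas as pd

results = pd.DataFrame({
    "group":     ["male", "male", "male", "female", "female", "female"],
    "true":      ["M", "M", "M", "F", "F", "F"],
    "predicted": ["M", "M", "M", "F", "M", "M"],
})

# Overall accuracy blends the groups together and hides the disparity...
print("overall accuracy:", (results["true"] == results["predicted"]).mean())

# ...so report the error rate separately for each group.
results["error"] = results["true"] != results["predicted"]
print(results.groupby("group")["error"].mean())
```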

Data bias
Another issue with AI is that it often relies on biased data. This can happen because the data is collected in a biased way, or because it faithfully reflects the biases that already exist in the real world.
A notorious example, discussed by Ellen Broad, an expert in data sharing, infrastructure and ethics, is Google Photos’ image recognition system labelling photos of Black people as ‘gorillas’ in 2015. The system had been trained on a dataset in which white faces were heavily over-represented, so it performed far worse on darker-skinned faces.
This kind of data bias can lead to AI systems making inaccurate and potentially harmful decisions: if a facial recognition system is more likely to misidentify Black people, it could lead to innocent people being wrongly arrested.
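
Catching this kind of skew is often possible before any model is trained, simply by counting who is in the data. A minimal sketch with illustrative labels:

```python
# Minimal sketch: auditing group representation in a training set before
# any model is trained. The labels are illustrative.
from collections import Counter

training_labels = [
    "white", "white", "white", "white", "white",
    "white", "black", "white", "asian", "white",
]

counts = Counter(training_labels)
total = sum(counts.values())
for group, n in counts.most_common():
    print(f"{group:8s} {n:3d}  ({n / total:.0%})")

# Heavily skewed counts are a warning sign: the model will see far fewer
# examples of under-represented groups and will tend to fail on them.
```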

Brandee Barker’s Twitter post

Moreover, there’s something deeply troubling about the way AI is being used to create ‘portraits’ of people, particularly women. In the case of Brandee Barker, the Lensa AI produced deeply sexualized versions of her.
This isn’t just a case of bad taste or something that can be chalked up to the ‘uncanny valley’ effect. There’s a more sinister element at play here: the objectification and sexualization of women by AI.
It’s not just Barker who has been rendered in a sexualized manner by AI. In an essay for Wired, the writer Olivia Snow wrote that she submitted “a mix of childhood photos and [current] selfies” to Lensa AI and received back “fully nude photos of an adolescent and sometimes childlike face but a distinctly adult body”.
The AI-generated images of women are eerily realistic, and that’s what makes them so troubling. They look like real women, but they’ve been created by machines with the sole purpose of objectifying and sexualizing them. This is a scary prospect, because it means that AI is perpetuating and amplifying the misogyny that already exists in our society.

Addressing the issue
Given the potential impacts of AI-perpetuated misogyny, it is important that the issue is addressed. The solution to this problem is not to try to create AI that is gender-neutral. Instead, we need to ensure that AI systems are designed and built with the needs of all users in mind. This includes ensuring that a diverse range of voices are involved in the development process and that training data is representative of the real world. Only by taking these steps can we create AI systems that are truly inclusive and beneficial for everyone.
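
Making ‘representative training data’ more than a slogan means actually testing for it. A minimal sketch, assuming scipy and an illustrative 50/50 reference population; the counts are made up:

```python
# Minimal sketch: testing whether a dataset's gender mix plausibly matches
# a reference population, via a chi-square goodness-of-fit test. Assumes
# scipy; the counts and the 50/50 reference are illustrative assumptions.
from scipy.stats import chisquare

observed = [820, 180]   # e.g. images of men vs. images of women in the set
expected = [500, 500]   # what a balanced, representative set would contain

stat, p = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.1f}, p = {p:.3g}")
# A tiny p-value says the skew is not sampling noise: rebalance or augment
# the data before training if equal performance across groups is the goal.
```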

Sources:

Lensa AI app

https://www.theguardian.com/us-news/2022/dec/09/lensa-ai-portraits-misogyny

https://www.adweek.com/performance-marketing/microsofts-chatbot-tay-just-went-racist-misogynistic-anti-semitic-tirade-170400/

https://news.mit.edu/2018/study-finds-gender-skin-type-bias-artificial-intelligence-systems-0212


AI follows human gender biases

Although a lot has changed in recent years when it comes to empowering women to pursue professional careers, there is still plenty of room for much-needed improvement. Researchers from the University of Melbourne, in their study “Ethical Implications of AI Bias as a Result of Workforce Gender Imbalance”, commissioned by UniBank, investigated the problem of AI favouring male job applications over female ones.

Researchers observed hiring patterns for three specific roles chosen based on gender ratios:

  • Male-dominated – data analyst
  • Gender-balanced – finance officer
  • Female-dominated – recruitment officer.

Half of the hiring panellists were given the original CVs with the candidates’ genders shown; the other half were given the exact same CVs but with the genders swapped (male to female and female to male, so that, for example, “Julia” became “Peter” and “Mark” became “Mia”). The recruiters were asked to rank the CVs, with 1 being the best-ranked candidate. Finally, the researchers built a hiring algorithm based on the panellists’ decisions.
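
The gender-swap itself is mechanically simple, which is part of what makes the design clean: everything in the CV stays constant except the gendered tokens. Below is a minimal sketch of how such a swap could be scripted; the Julia/Peter and Mark/Mia pairs are the study’s own examples, while the pronoun handling is an illustrative assumption.

```python
# Minimal sketch of the gender-swap step: the CV text stays identical
# except for gendered tokens. The name pairs are the study's examples;
# the pronoun handling is an illustrative assumption.
import re

SWAPS = {"Julia": "Peter", "Mark": "Mia", "she": "he", "her": "his"}
SWAPS.update({v: k for k, v in list(SWAPS.items())})  # make it symmetric

def flip_gender(cv_text: str) -> str:
    """Replace each gendered token with its counterpart, leaving all
    experience and education details untouched."""
    pattern = re.compile(r"\b(" + "|".join(SWAPS) + r")\b")
    return pattern.sub(lambda m: SWAPS[m.group(1)], cv_text)

print(flip_gender("Julia has five years of experience; she led her own team."))
# -> "Peter has five years of experience; he led his own team."
```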

The results showed that women’s CVs were ranked up to four places lower than men’s, even though they had the same skills. The recruiters claimed they were judging based on experience and education.

Male candidates were more often ranked in the top three for all of the listed jobs, and female candidates were more often ranked in the bottom three for all of them!

Male candidates were on average ranked higher for the data analyst and finance officer positions by both female and male recruiters, which suggests the bias was unconscious. However, for the female-dominated role of recruitment officer, the bias also worked the other way round: female CVs on average ranked slightly better than male ones.

The researchers then fitted a regression model to the rankings, which showed that a candidate’s gender was one of the most critical factors in deciding who would get the job.
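
The logic of that analysis can be sketched in a few lines: regress the rank a CV received on its attributes, including gender, and inspect the coefficients. A minimal illustration, assuming pandas and statsmodels, with entirely made-up data and column names:

```python
# Minimal sketch of the regression logic: regress the rank each CV received
# on its attributes and see how much weight falls on gender. Assumes pandas
# and statsmodels; the data and column names are made up, not the study's.
import pandas as pd
import statsmodels.api as sm

cvs = pd.DataFrame({
    "rank":      [1, 2, 3, 4, 5, 6, 7, 8],   # 1 = best-ranked candidate
    "years_exp": [6, 5, 6, 4, 5, 3, 4, 2],
    "degree":    [1, 1, 0, 1, 0, 1, 0, 0],   # 1 = relevant degree held
    "female":    [0, 0, 1, 0, 1, 1, 1, 1],   # 1 = female candidate
})

X = sm.add_constant(cvs[["years_exp", "degree", "female"]])
fit = sm.OLS(cvs["rank"], X).fit()
print(fit.params)

# A large positive coefficient on "female" (remember: a higher rank number
# means a worse placement) would indicate gender is driving the rankings
# even when skills are held fixed, which is what the study reported.
```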

Researchers warn that human bias, once adopted by AI, can operate at a far bigger scale. Mike Lanzing, UniBank’s General Manager, points out that “As the use of artificial intelligence becomes more common, it’s important that we understand how our existing biases are feeding into supposedly impartial models”.

Dr Marc Cheong, report co-author and digital ethics researcher from the Centre for AI and Digital Ethics (CAIDE), said that “Even when the names of the candidates were removed, AI assessed resumés based on historic hiring patterns where preferences leaned towards male candidates. For example, giving advantage to candidates with years of continuous service would automatically disadvantage women who’ve taken time off work for caring responsibilities”.
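
Dr Cheong’s example points at a general mechanism: a facially neutral feature can act as a proxy for gender. A minimal sketch of the check, assuming pandas, with illustrative numbers:

```python
# Minimal sketch of the proxy effect described above: with names removed,
# a feature like "years of continuous service" can still encode gender via
# career breaks. Assumes pandas; the numbers are illustrative.
import pandas as pd

cvs = pd.DataFrame({
    "female":             [0, 0, 0, 0, 1, 1, 1, 1],
    "continuous_service": [8, 7, 9, 6, 4, 3, 8, 2],  # career breaks lower it
})

# A strongly negative correlation means the "neutral" feature leaks gender,
# so a model that rewards continuous service penalises women indirectly.
print(cvs["female"].corr(cvs["continuous_service"]))
```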

This study calls for immediate action to prevent AI from acquiring gender biases from people, as such biases can be hard to eradicate later on, especially given the constantly increasing use of AI in recruitment processes. The report suggests a number of measures that can be taken to reduce the bias, such as training programs for HR professionals. It is crucial to identify the biases in our society before AI mimics them.

Sources

https://www.dropbox.com/s/tahw9ad39bjirfi/NEW%20RESEARCH%20REPORT%20Ethical%20Implications%20of%20AI%20Bias%20as%20a%20Result%20of%20Workforce%20Gender%20Imbalance%20%28UniMelb%2C%20UniBank%29.pdf?dl=0

https://about.unimelb.edu.au/newsroom/news/2020/december/entry-barriers-for-women-are-amplified-by-ai-in-recruitment-algorithms,-study-finds
