
AI Bias: Is Google doing more harm than good?

Reading Time: 4 minutes

How is Google tackling the negative impact of algorithmic bias? Considering Google’s recent upheavals, it seems as though the company is trying to conceal AI bias and ethical concerns rather than address them.

What Google's Firing of Researcher Timnit Gebru Means for AI Ethics

Timnit Gebru, a well-respected leader in AI bias and ethics research, unexpectedly left Google earlier this month. Gebru says she was fired via email over the publication of a research paper that Google claimed “didn’t meet the bar for publication”. Google, however, states that Gebru resigned voluntarily. More than 5,300 people, including over 2,200 Google employees, have now signed an open letter protesting Google’s treatment of Gebru and demanding that the company explain itself.

The research paper Gebru co-authored criticized large language models, the kind used in Google’s sprawling search engine, arguing that such models could hurt marginalized communities. The conflict over the publication of this paper is what precipitated Gebru’s departure.

Gebru and her co-authors explain in the paper that there is a lot wrong with large language models. Chiefly, because they are trained on huge bodies of existing text, the systems tend to absorb a great deal of existing human bias, particularly around race and gender. The paper also states that these large models take in so much data that they are awfully difficult to audit and test; hence some of this bias may go undetected.

The paper additionally highlighted the adverse environmental impact: training and running such huge language models on electricity-hungry servers leaves a significant carbon footprint. It noted that training BERT, Google’s own language model, produced approximately 1,438 pounds of carbon dioxide, around the same amount as a round-trip flight from New York to San Francisco.

Moreover, the authors argue that pouring resources into ever-larger language models diverts effort from building systems that might actually “understand” language and learn more efficiently, the way humans do.

One reason Google may have been especially upset with Gebru and her co-authors for scrutinizing the ethics of large language models is that Google has a considerable amount of resources invested in this technology.

Google has its own large language model, called BERT, which it uses to help power search results in several languages, including English. BERT is also used by other companies to build their own language-processing software.
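To give a rough sense of what “using BERT” looks like in practice, here is a minimal sketch that queries the publicly released bert-base-uncased checkpoint through the open-source Hugging Face transformers library. This is an illustrative assumption on my part (the library, model name, and example sentence are not from Google’s own stack), but it shows the kind of fill-in-the-blank prediction such a model learns purely from its training text.

```python
# Minimal sketch: querying a publicly released BERT checkpoint with the
# Hugging Face `transformers` library (assumed here for illustration --
# this is not how Google serves BERT internally).
from transformers import pipeline

# Load the open-source English BERT model as a fill-in-the-blank predictor.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Ask the model to fill in the masked word; it returns its top guesses with
# scores, learned entirely from patterns in the text it was trained on.
for prediction in fill_mask("The doctor said [MASK] would be back soon."):
    print(prediction["token_str"], round(prediction["score"], 3))
```

Because the model’s guesses come straight from patterns in its training data, this same interface is also where absorbed biases surface.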

BERT is optimized to run on Google’s own specialized AI processors and is accessible exclusively to clients of its cloud computing service. A company looking to train and run a language model of its own would need a great deal of cloud computing time, so companies are more inclined to simply use Google’s BERT. BERT is a key feature of Google’s business, generating about $26.3 billion in revenue. According to technology analyst Kjell Carlsson, the market for such large language models is “poised to explode”.

This market opportunity is exactly what Gebru and her co-authors were criticizing: in their view, Google was putting its profit-maximization aims above ethical and humanitarian concerns.

Google has been called out for negative bias in artificial intelligence before. In 2016, it was heavily faulted for racial bias when users noticed that searching for “three white teenagers” returned stock photos of cheerful Caucasian adolescents, while searching for “three black teenagers” returned an array of mug shots. The same search, with “Asian” substituted for “white,” returned various links to pornography. Google also came under fire in July 2015 when its photo app automatically labeled a photo of two Black friends as gorillas. These are only a few instances out of many, and the problem is not limited to results: the predicted (autocomplete) results are no less misleading and harmful. Such bias must be curtailed, as it reinforces untrue negative stereotypes and harms POC communities.

In the end, it is unfortunate that Google (like other giant tech corporations) still faces the challenge of eliminating negative bias in artificial intelligence. At a Google conference in 2017, the company’s then head of artificial intelligence said we don’t need to worry about killer robots; instead, we need to worry about bias.

The current lead of Google AI, Jeff Dean, said in 2017: “when an algorithm is fed a large collection of text, it will teach itself to recognize words which are commonly put together. You might learn, for example, an unfortunate connotation, which is that doctor is more associated with the word ‘he’ than ‘she’, and nurse is more associated with the word ‘she’ than ‘he’. But you’d also learn that surgeon is associated with scalpel and that carpenter is associated with hammer. So a lot of the strength of these algorithms is that they can learn these kinds of patterns and correlations”.
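The associations Dean describes are easy to see for yourself. Here is a small sketch using publicly available GloVe word vectors loaded through gensim; the specific vector set and word list are my own illustrative choices, not the embeddings inside Google’s systems, but the doctor/nurse pattern he mentions shows up in much the same way.

```python
# Illustrative sketch of learned word associations, using public GloVe
# vectors via gensim (assumed for illustration -- not Google's own models).
import gensim.downloader as api

# Load pretrained word vectors (downloaded automatically on first run).
vectors = api.load("glove-wiki-gigaword-100")

# Cosine similarity: higher means the two words appear in more similar
# contexts in the training text -- exactly where unwanted bias creeps in.
for word in ["doctor", "nurse", "surgeon", "carpenter"]:
    print(word,
          "he:", round(float(vectors.similarity(word, "he")), 3),
          "she:", round(float(vectors.similarity(word, "she")), 3))
```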

The task, says Jeff Dean, is to work out which biases you want an algorithm to pick up on, and it is the science behind this that his team, and many in the AI field, are trying to navigate.

“It’s a bit hard to say that we’re going to come up with a perfect version of unbiased algorithms.”

Source: https://www.bbc.com/news/business-46999443

References:

https://docs.google.com/document/d/1f2kYWDXwhzYnq8ebVtuk9CqQqz7ScqxhSIxeYGrWjK0/edit

https://googlewalkout.medium.com/standing-with-dr-timnit-gebru-isupporttimnit-believeblackwomen-6dadc300d382

https://theconversation.com/upheaval-at-google-signals-pushback-against-biased-algorithms-and-unaccountable-ai-151768

https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/
