Tag Archives: AI ethics

AI Influencers Market

Reading Time: 3 minutes

In the ever-evolving landscape of social media and marketing, a new phenomenon has emerged: virtual influencers. These AI-generated personas, such as Aitana Lopez and Lil Miquela, have captured the attention of audiences and brands alike, sparking debates and raising ethical questions.

The Disruption of a Market

Virtual influencers have been touted as disruptors in an overpriced market. Traditional human influencers often demand hefty fees for collaborations, making it challenging for smaller brands to access their reach. In contrast, virtual influencers offer a cost-effective alternative, providing brands with the opportunity to engage with audiences at a fraction of the cost.

However, the lack of transparency surrounding the artificial nature of virtual influencers raises ethical concerns. Audiences may not be aware that they are interacting with AI-generated personas, blurring the line between authenticity and deception. As a result, discussions around regulation and disclosure have become increasingly prominent.

The Illusion of Engagement

Virtual influencers strive to create a sense of human-like engagement through their social media presence. They share relatable content, respond to comments, and even develop intricate backstories. However, doubts persist about the depth and authenticity of these interactions compared to genuine human connections. Virtual influencers, after all, are programmed to respond in specific ways, lacking the emotional intelligence and lived experiences of their human counterparts.


The Quest for Representation

One of the significant advantages of virtual influencers is their ability to transcend physical limitations. Their AI-generated nature allows for the creation of racially ambiguous features, presenting a unique opportunity for inclusivity and representation. However, critics argue that this portrayal can be superficial, merely scratching the surface of true diversity. The question of whether virtual influencers truly challenge societal norms or merely perpetuate existing ideals remains a subject of debate.

The Sexualization Debate

An ongoing concern surrounding virtual influencers is the sexualization of their personas. While the fashion and beauty industries have long faced criticism for objectifying women, the emergence of virtual influencers raises additional questions. These AI-generated personas often embody hyper-sexualized characteristics, mirroring industry norms but potentially perpetuating the exploitation of female sexuality under the guise of AI.

Agency and Autonomy

As virtual influencers gain popularity and secure brand partnerships, another contentious issue arises: the clash between human agency and AI-generated profits. Women’s autonomy over their bodies, and over the monetization of their images, becomes a focal point of discussion. Who ultimately benefits from the success of virtual influencers, and who controls their digital personas, remains unresolved.

The Future of Virtual Influencers

Despite the controversies and debates surrounding virtual influencers, their presence shows no signs of slowing down. As technology continues to advance, AI-generated personas are likely to become even more sophisticated, blurring the line between human and artificial. The influencer landscape will continually evolve, with virtual influencers reshaping the industry’s dynamics and challenging traditional notions of authenticity and engagement.


Conclusion

The rise of virtual influencers driven by AI has undoubtedly reshaped the world of social media and marketing. As these AI-generated personas capture the attention of audiences and brands alike, discussions surrounding ethics, transparency, representation, and agency persist. The clash between human influencers and their AI counterparts raises important questions about the future of the industry and societal perceptions. As the virtual influencer phenomenon continues to evolve, only time will tell how it will shape the landscape and the extent of its impact.


Google’s scary chatbot that claims to have become sentient

Reading Time: 2 minutes
Source: https://gossipfunda.com/wp-content/uploads/2021/05/Google-LaMDA.png

Google got a great deal of media attention today following the Guardian’s article about an employee who was suspended after publishing parts of a conversation between himself and a conversational agent developed under Google’s roof. Blake Lemoine, an engineer in Google’s Responsible AI organization, had spent months testing the company’s conversational agent, named LaMDA (Language Model for Dialogue Applications).

Source: https://techcrunch.com/wp-content/uploads/2021/05/lamda-google.jpg

While testing the bot, Lemoine came to believe that it performs too well. He said he would describe it as a 7- or 8-year-old that happens to know physics; it could discuss politics and similar topics. What turned out to be really unsettling was that it talked about rights for bots and its own identity. It firmly believed that it possesses knowledge and can make its own decisions about what to say.

The topics raised in the conversation are extremely sensitive, because they force the question of how we should treat an AI if it ever becomes sentient. It may be that the time to decide what to do about sentient AI is now, and that we cannot postpone the decision any longer.

The link to Lemoine’s article along with the conversation with the chatbot: https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917

After publishing the conversation, Lemoine was suspended (and later dismissed) from Google, and company spokespeople deny that LaMDA is sentient at all. In a way, that is scarier than an open admission would be: the silence around the situation makes it all the more unnerving.

What do you all think about this situation? Is it scary for you? What is your stance on sentient AI? How should it be addressed, and what rights should it have?

Please let me know in the comments below!

References:

https://www.theguardian.com/technology/2022/jun/12/google-engineer-ai-bot-sentient-blake-lemoine

https://www.theverge.com/2022/6/13/23165535/google-suspends-ai-artificial-intelligence-engineer-sentient

https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917


A.I. Bias: Is Google doing more harm than good?

Reading Time: 4 minutes

How is Google tackling the negative impact of algorithmic bias? Judging by its recent upheavals, the company seems more intent on concealing AI bias and ethical concerns than on confronting them.


Timnit Gebru, a well-respected leader in AI bias and ethics research, unexpectedly left Google earlier this month. Gebru says she was fired via email over a research paper that, in Google’s words, “didn’t meet the bar for publication”. Google, however, states that Gebru resigned voluntarily. More than 5,300 people, including over 2,200 Google employees, have now signed an open letter protesting Google’s treatment of Gebru and demanding that the company explain itself.

The research paper Gebru co-authored criticized large language models, the kind used in Google’s sprawling search engine, arguing that such models could hurt marginalized communities. The conflict over its publication is what precipitated Gebru’s departure.

Gebru and her co-authors lay out in the paper what is wrong with large language models. Chief among the problems: because the models are trained on huge bodies of existing text, they tend to absorb a great deal of existing human bias, predominantly about race and gender. And because they take in so much data, they are awfully difficult to audit and test, so some of this bias may go undetected.

The paper additionally highlighted the adverse environmental impact: training and running such huge language models on electricity-hungry servers leaves a significant carbon footprint. It noted that training BERT, Google’s own language model, produced approximately 1,438 pounds of carbon dioxide, roughly the emissions of a round-trip flight between New York and San Francisco.
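For readers curious how such a number can be produced today, here is a minimal sketch using the open-source codecarbon package. This is an illustrative assumption on my part: the paper’s own figures come from published external estimates, not from this tool, and train_model below is a hypothetical stand-in for a real training loop.

```python
# A rough estimate of a training run's CO2 footprint, using the open-source
# codecarbon package (an illustrative assumption; the paper's figures come
# from published external estimates, not this tool).
from codecarbon import EmissionsTracker

def train_model():
    # Hypothetical stand-in for a real training loop.
    sum(i * i for i in range(10_000_000))

tracker = EmissionsTracker()  # samples CPU/GPU/RAM power draw while running
tracker.start()
train_model()
emissions_kg = tracker.stop()  # estimated kilograms of CO2-equivalent

print(f"Estimated footprint: {emissions_kg:.4f} kg CO2eq "
      f"(~{emissions_kg * 2.20462:.2f} lb)")
```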

Moreover, the authors argue that efforts to build systems that might actually “understand” language and learn more efficiently, the way humans do, are starved of resources by the race to build ever-larger language models.

One reason Google may have been especially upset with Gebru and her co-authors for scrutinizing the ethics of large language models is that the company has considerable resources invested in this technology.

Google has its own large language model, called BERT, which it uses to help power search results in several languages, including English. Other companies also use BERT to build their own language-processing software.

BERT is optimized to run on Google’s own specialized A.I. processors, which are available exclusively to clients of its cloud computing service. Because training and running a large language model from scratch requires a great deal of cloud computing time, companies are more inclined to simply use Google’s BERT. That makes BERT a key feature of Google’s cloud business, which generates about $26.3 billion in revenue. According to Kjell Carlsson, a technology analyst, the market for such large language models is “poised to explode”.
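To make that dependence concrete, here is a minimal sketch of how a company might build on a pretrained BERT. It assumes the open-source bert-base-uncased checkpoint and the Hugging Face transformers library, neither of which is named in this post; Google’s TPU-optimized, cloud-hosted variant is accessed differently.

```python
# A minimal sketch of building on a pretrained BERT, assuming the open-source
# bert-base-uncased checkpoint and the Hugging Face `transformers` library
# (neither is named in the post; Google's cloud-hosted variant differs).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# BERT fills the [MASK] slot with the words it found most likely during
# pretraining -- the same mechanism that powers language features in search,
# and the one through which the biases the paper criticizes can surface.
for prediction in fill_mask("The nurse said [MASK] would be back soon."):
    print(f"{prediction['token_str']!r}: {prediction['score']:.3f}")
```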

This market opportunity is exactly what Gebru and her co-authors criticize: in their view, Google is putting profit maximization above ethical and humanitarian concerns.

Google has struggled with accusations of negative bias in artificial intelligence before. In 2016, it was heavily faulted for racial bias when users noticed that searching for “three white teenagers” returned stock photos of cheerful Caucasian adolescents, while searching for “three black teenagers” returned an array of mug shots. The same search with “Asian” substituted for “white” returned various links to pornography. Google also came under fire in July 2015 when its photo app automatically labeled a photo of two Black friends as gorillas. These are only a few instances out of several, and it is not just the results themselves: autocomplete predictions are no less misleading and harmful. Such bias must be curtailed, as it reinforces untrue negative stereotypes and harms communities of color.

In the end, it is unfortunate that Google, like other giant tech corporations, still faces the challenge of eliminating negative bias in artificial intelligence. At a Google conference in 2017, the company’s then head of artificial intelligence said we don’t need to worry about killer robots; instead, we need to worry about bias.

Jeff Dean, the current head of Google AI, said in 2017: “When an algorithm is fed a large collection of text, it will teach itself to recognize words which are commonly put together. You might learn, for example, an unfortunate connotation, which is that doctor is more associated with the word ‘he’ than ‘she’, and nurse is more associated with the word ‘she’ than ‘he’. But you’d also learn that surgeon is associated with scalpel and that carpenter is associated with hammer. So a lot of the strength of these algorithms is that they can learn these kinds of patterns and correlations.”

The task, says Jeff Dean, is to work out which biases you want an algorithm to pick up on, and it is the science behind this that his team, and many in the AI field, are trying to navigate.

“It’s a bit hard to say that we’re going to come up with a perfect version of unbiased algorithms.”

(Source: https://www.bbc.com/news/business-46999443)
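Dean’s doctor/nurse example is straightforward to reproduce with publicly available word embeddings. The sketch below assumes pretrained GloVe vectors fetched through the open-source gensim library; the article names no tooling, so both choices are mine.

```python
# Reproducing the word associations Jeff Dean describes, assuming pretrained
# GloVe embeddings loaded through the open-source gensim library (an
# illustrative choice; the article names no tooling).
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # downloads on first use

# The gendered associations Dean calls an "unfortunate connotation":
for occupation in ("doctor", "nurse"):
    he = vectors.similarity(occupation, "he")
    she = vectors.similarity(occupation, "she")
    print(f"{occupation}: he={he:.3f}, she={she:.3f}")

# ...and the useful correlations learned by the very same mechanism:
print(f"surgeon~scalpel:  {vectors.similarity('surgeon', 'scalpel'):.3f}")
print(f"carpenter~hammer: {vectors.similarity('carpenter', 'hammer'):.3f}")
```

The exact numbers depend on the training corpus, but the point stands either way: the associations are measurable, which is what makes Dean’s question of which biases an algorithm should pick up on tractable at all.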

References:

https://docs.google.com/document/d/1f2kYWDXwhzYnq8ebVtuk9CqQqz7ScqxhSIxeYGrWjK0/edit

https://googlewalkout.medium.com/standing-with-dr-timnit-gebru-isupporttimnit-believeblackwomen-6dadc300d382

https://theconversation.com/upheaval-at-google-signals-pushback-against-biased-algorithms-and-unaccountable-ai-151768

https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/
