Tag Archives: AI ethics

The False Dichotomy in AI Governance: Beyond Centralization and Decentralization

Reading Time: 3 minutes
[Image: a balance scale weighing glowing AI elements against a human hand holding ethical symbols, set against interconnected digital networks]

It’s time to cut through the noise surrounding artificial intelligence (AI) governance. While tech leaders and media often frame it as a battle between centralization and decentralization, the reality is more complex. Based on real-world implementations, it is evident that organizations succeed by adopting a more balanced approach.

The Reality: Current Landscape

Organizations often waste significant time debating whether to centralize or decentralize AI development. The truth? They’re asking the wrong question.

Consider the evolution of the internet. Early debates revolved around whether it should be controlled by governments, corporations, or function as a completely free network. What emerged was a complex ecosystem of interconnected systems, each serving different needs while maintaining interoperability standards.

This pattern repeats across technological revolutions. During the development of cloud computing, similar debates occurred between advocates of public and private clouds. Today, hybrid approaches dominate. In mobile app development, tensions between native and web-based approaches ended with a pragmatic compromise.

Lessons from Practice

The past year has provided fascinating case studies on managing these challenges. Take the healthcare sector, where a major hospital network revolutionized its AI implementation approach. Instead of choosing between centralized control and departmental autonomy, they created a multi-level governance system tailored to risk levels and use cases.

Similarly, a global manufacturing company succeeded by implementing what they call “guided autonomy”—providing clear frameworks while allowing individual units to innovate within boundaries. Their approach has since been adopted by organizations across multiple industries.

A New Perspective: Federated AI Governance

Based on observations, one striking trend emerges: the most successful organizations do not take sides—they transcend the debate. They build “federated AI governance” frameworks that:

  1. Establish clear safety guidelines without stifling innovation,
  2. Enable rapid development while maintaining accountability,
  3. Foster collaboration without compromising security,
  4. Scale oversight naturally with growth,
  5. Balance local autonomy with global standards,
  6. Create feedback loops between governance and implementation,
  7. Adapt to evolving technologies and regulations.

Practical Implications for Modern Organizations

Here’s where theory meets practice. Traditional management wisdom suggests centralization enables better control. But consider how modern organizations operate. Netflix’s famous culture deck emphasizes “context, not control.” Spotify’s Squad model balances autonomy with alignment. These are not coincidences—they address the reality that innovation requires both structure and freedom.

Let’s analyze how this plays out in various organizational functions:

Research and Development

  • Centralized safety standards and ethical guidelines,
  • Decentralized experimentation and innovation,
  • Knowledge-sharing systems,
  • Cross-functional review processes.

Operations

  • Core infrastructure standards,
  • Flexibility in local implementation,
  • Scalable oversight mechanisms,
  • Adaptive control systems.

Risk Management

  • Global risk assessment frameworks,
  • Local risk monitoring,
  • Real-time feedback systems,
  • Collaborative mitigation strategies.

Case Studies of Successful Implementation

Technology Sector

A leading software company recently revamped its AI governance structure, moving from a traditional hierarchical model to a network-based approach. The results were striking: 40% faster deployment times while maintaining rigorous safety standards.

Financial Services

A global bank implemented a hybrid governance model, reducing compliance issues by 60% while accelerating innovation cycles. Their approach combines centralized risk management with distributed development teams.

Manufacturing

A federated AI implementation approach by an automotive supplier led to a 30% improvement in process efficiency while strengthening quality control measures.

The Path Forward: Building Adaptive Organizations

Rather than getting stuck in philosophical debates about centralization vs. decentralization, smart organizations focus on building adaptive capabilities. They learn from historical patterns while addressing the unique challenges posed by AI.

The future belongs to organizations that can:

  • Build flexible oversight mechanisms,
  • Foster genuine cross-functional collaboration,
  • Create meaningful feedback loops between development and governance,
  • Adapt their approach based on real-world outcomes,
  • Balance innovation with responsibility,
  • Scale governance effectively,
  • Maintain organizational agility.

Implementation Framework

To move toward a more balanced approach, organizations should consider:

Assessment Phase

  • Evaluate current governance structures,
  • Identify pain points and bottlenecks,
  • Map stakeholder needs and concerns,
  • Analyze the risk landscape.

Design Phase

  • Create flexible governance frameworks,
  • Define clear roles and responsibilities,
  • Establish communication channels,
  • Develop feedback mechanisms.

Implementation Phase

  • Start with pilot programs,
  • Gather real-world data,
  • Adjust based on outcomes,
  • Scale successful approaches.

Conclusion

The next step in AI governance is not about choosing between centralization and decentralization—it is about building organizations capable of dynamic adaptation. Success will come to those who can balance structure with flexibility, control with innovation, and global standards with local needs.


Written with the help of Claude

Image generated by DALL-E



“When Machines Take the Mic: The AI Experiment at OFF Radio Kraków”

Reading Time: 3 minutes
Alex Szulc is one of three AI-generated characters who will "host" shows on the Off Radio channel

Artificial Intelligence is increasingly making its presence felt across various professions, replacing humans in both routine and more creative tasks. In the digital age, AI has become a support tool in areas such as data analysis, communication automation, and even in creative arts. While some view this as an exciting vision of the future where machines enhance our lives, others question whether we are approaching a point where technology will eliminate the human element in fields that require emotion and personality. Such dilemmas become particularly pronounced when AI enters cultural and creative industries—one notable example in Poland is OFF Radio Kraków, which has embarked on a bold experiment with AI.

Kraków’s OFF Radio introduced a controversial innovation that sparked a wave of criticism within the media community. On October 22, 2024, the station, part of public Radio Kraków, began broadcasting shows hosted by characters created by artificial intelligence. Instead of experienced journalists, listeners now hear the voices of 20-year-old Emilia, 22-year-old Jakub, and 23-year-old Alex, none of whom exist in reality.

Controversy Surrounding Layoffs

The decision to replace human hosts with AI came after the layoffs of several employees, leading to outrage among those connected to Kraków’s culture and media. Former journalists, including Mateusz Demski, expressed their dissatisfaction in an open letter, pointing to a lack of ethical justification for this experiment. Demski emphasized that artificial intelligence cannot replicate human sensitivity and understanding in cultural and social matters.

Management’s Arguments

Marcin Pulit, the liquidator of Radio Kraków, defended the decision to implement AI by stating that layoffs were due to low listenership and content overlap with other stations. Pulit assured that no permanent staff were dismissed solely because of AI. However, many critics argue that this approach is not only unethical but could also lead to further dehumanization of public media.

Research-Media Experiment

The new format of OFF Radio Kraków is intended as an experiment to explore the impact of artificial intelligence on culture and media. However, the lack of concrete collaboration with research institutions raises doubts about the project’s credibility. Many fear this move could set a precedent for other public media outlets in Poland and abroad.

Social Reactions

The decision to replace journalists with AI has faced widespread public opposition. A petition against these changes has garnered over 15,000 signatures. Critics stress that the use of AI in public media should be regulated by legal frameworks to prevent situations where humans are replaced by machines without proper ethical considerations.

Summary and my personal opinion

This week, I attended the Masters&Robots conference, where I really enjoyed Kevin Kelly’s lecture. One statement that particularly struck me was that “technology is like thinking: if you have a bad idea, nobody will tell you to stop thinking; instead, they will suggest you find a better idea.” While I completely agree with this, I believe our society is not prepared for moves like the one made by OFF Radio Kraków. In my opinion, now is not the right time to implement such drastic changes, as people will react very negatively. Many will focus primarily on the fact that journalists have lost their jobs and will fear for their own future. I believe a better solution would be to introduce changes gradually. Initially, AI and journalists could collaborate, with programs co-hosted by both artificial intelligence and humans. Only after analyzing audience reactions should further changes be made. I think this approach would face significantly less criticism.

This blog post was generated with assistance from Perplexity





References:

https://businessinsider.com.pl/wiadomosci/off-radio-krakow-ma-nowych-prowadzacych-stworzyla-ich-ai/k75fkbl

https://spidersweb.pl/2024/10/off-radio-krakow-sztuczna-inteligencja.html

https://cyberdefence24.pl/technologie/ai-zamiast-dziennikarzy-fala-krytyki-po-eksperymencie-off-radio-krakow

https://wydarzenia.interia.pl/zagranica/news-echa-zmian-w-off-radio-krakow-komentarze-zagranicznych-medio,nId,7842572

https://marketingprzykawie.pl/espresso/sztuczna-inteligencja-w-off-radio-krakow-wywolala-kontrowersje/

https://cyberdefence24.pl/technologie/ai-w-off-radio-krakow-mamy-stanowisko-ministerstwa-kultury


AI Influencers market

Reading Time: 3 minutes

In the ever-evolving landscape of social media and marketing, a new phenomenon has emerged: virtual influencers. These AI-generated personas, such as Aitana Lopez and Lil Miquela, have captured the attention of audiences and brands alike, sparking debates and raising ethical questions.

The Disruption of a Market

Virtual influencers have been touted as disruptors in an overpriced market. Traditional human influencers often demand hefty fees for collaborations, making it challenging for smaller brands to access their reach. In contrast, virtual influencers offer a cost-effective alternative, providing brands with the opportunity to engage with audiences at a fraction of the cost.

However, the lack of transparency surrounding the artificial nature of virtual influencers raises ethical concerns. Audiences may not be aware that they are interacting with AI-generated personas, blurring the line between authenticity and deception. As a result, discussions around regulation and disclosure have become increasingly prominent.

The Illusion of Engagement

Virtual influencers strive to create a sense of human-like engagement through their social media presence. They share relatable content, respond to comments, and even develop intricate backstories. However, doubts persist about the depth and authenticity of these interactions compared to genuine human connections. Virtual influencers, after all, are programmed to respond in specific ways, lacking the emotional intelligence and lived experiences of their human counterparts.


The Quest for Representation

One of the significant advantages of virtual influencers is their ability to transcend physical limitations. Their AI-generated nature allows for the creation of racially ambiguous features, presenting a unique opportunity for inclusivity and representation. However, critics argue that this portrayal can be superficial, merely scratching the surface of true diversity. The question of whether virtual influencers truly challenge societal norms or merely perpetuate existing ideals remains a subject of debate.

The Sexualization Debate

An ongoing concern surrounding virtual influencers is the sexualization of their personas. While the fashion and beauty industry have long faced criticism for objectifying women, the emergence of virtual influencers raises additional questions. These AI-generated personas often embody hyper-sexualized characteristics, mirroring industry norms but potentially perpetuating the exploitation of female sexuality under the guise of AI.

Agency and Autonomy

As virtual influencers gain popularity and secure brand partnerships, another contentious issue arises: the clash between human agency and AI-generated profits. Female autonomy over their bodies and the monetization of their images becomes a focal point of discussion. The question of who ultimately benefits from the success of virtual influencers and whether they have control over their digital personas remains unresolved.

The Future of Virtual Influencers

Despite the controversies and debates surrounding virtual influencers, their presence shows no signs of slowing down. As technology continues to advance, AI-generated personas are likely to become even more sophisticated, blurring the line between human and artificial. The influencer landscape will continually evolve, with virtual influencers reshaping the industry’s dynamics and challenging traditional notions of authenticity and engagement.


Conclusion

The rise of virtual influencers driven by AI has undoubtedly reshaped the world of social media and marketing. As these AI-generated personas capture the attention of audiences and brands alike, discussions surrounding ethics, transparency, representation, and agency persist. The clash between human influencers and their AI counterparts raises important questions about the future of the industry and societal perceptions. As the virtual influencer phenomenon continues to evolve, only time will tell how it will shape the landscape and the extent of its impact.


Google’s scary chatbot that claims to have become sentient

Reading Time: 2 minutes
Google LaMDA - Gossipfunda
Source: https://gossipfunda.com/wp-content/uploads/2021/05/Google-LaMDA.png

Google drew significant media attention following the Guardian’s article about a controversy involving an employee who was dismissed after releasing parts of a conversation between himself and a conversational agent developed under Google’s roof. Blake Lemoine was an engineer in Google’s responsible-AI work who had spent the past year testing the company’s conversational agent, named LaMDA (Language Model for Dialogue Applications).

Google's LaMDA makes conversations with AIs more conversational | TechCrunch
Source: https://techcrunch.com/wp-content/uploads/2021/05/lamda-google.jpg

While testing the bot, Lemoine came to believe it was performing unsettlingly well. He said he would classify it as a 7- or 8-year-old child that happens to know physics; it could talk about politics and similar topics. What turned out to be really unnerving was that it talked about rights for bots and its own identity. It claimed to possess knowledge and to be able to make its own decisions about what to say.

The topics raised in the conversation are extremely sensitive: how should we treat an AI if it ever becomes sentient? It may be that the time to decide is now, and we cannot put it off any longer.

The link to Lemoine’s article along with the conversation with the chatbot: https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917

After this conversation was published, Lemoine was suspended and ultimately dismissed from Google, while company spokespeople denied his claims. To me, that denial and the silence around the affair are scarier than an open admission would be.

What do you all think about this situation? Is it scary for you? What is your stance on the approach toward sentient AI? How should it be addressed, and which rights should it have?

Please let me know in the comments below!

References:

https://www.theguardian.com/technology/2022/jun/12/google-engineer-ai-bot-sentient-blake-lemoine

https://www.theverge.com/2022/6/13/23165535/google-suspends-ai-artificial-intelligence-engineer-sentient

https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917


A.I. Bias: Is Google doing more harm than good?

Reading Time: 4 minutes

How is Google tackling the negative impact of algorithmic bias? Considering Google’s recent upheavals, it seems as though Google is trying to conceal AI bias and ethical concerns.


Timnit Gebru, a well-respected leader in AI bias and ethics research, unexpectedly left Google earlier this month. Gebru says she was fired via email over the publication of a research paper because it “didn’t meet the bar for publication”. However, Google states that Gebru resigned voluntarily. More than 5,300 people, including over 2,200 Google employees, have now signed an open letter protesting Google’s treatment of Gebru and demanding that the company explain itself.

The research paper Gebru co-authored criticized large language models, the kind used in Google’s sprawling search engine, arguing that such models could hurt marginalized communities. The conflict over the publication of this paper is what led to Gebru’s departure.

In the paper, Gebru and her co-authors explain how much is wrong with large language models. Chiefly, because they are trained on huge bodies of existing text, the systems tend to absorb a great deal of existing human bias, predominantly about race and gender. The paper notes that these large models take in so much data that they are extremely difficult to audit and test, so some of this bias may go undetected.

The paper additionally highlighted the adverse environmental impact: training and running such huge language models on electricity-hungry servers leaves a significant carbon footprint. It noted that training BERT, Google’s own language model, produced approximately 1,438 pounds of carbon dioxide, roughly the emissions of a round-trip flight between New York and San Francisco.

Moreover, the authors argue that spending resources on building ever-larger language models diverts effort from building systems that might actually “understand” language and learn more efficiently, the way humans do.

One reason Google might have been especially upset with Gebru and her co-authors scrutinizing the ethics of large language models is that Google has a considerable amount of resources invested in this technology.

Google has its own large language model, called BERT, which it has used to help power search results in several languages, including English. BERT is also used by other companies to build their own language-processing software.

BERT is optimized to run on Google’s own specialized A.I. processors, which are accessible only to clients of its cloud computing service. A company looking to train and run its own language model will require a lot of cloud computing time, so companies are more inclined to use Google’s BERT. BERT is a key feature of Google’s business, generating about $26.3 billion in revenue. According to technology analyst Kjell Carlsson, the market for such large language models is “poised to explode”.

This market opportunity is exactly what Gebru and her co-authors criticize, condemning Google for prioritizing profit maximization over ethical and humanitarian concerns.

Google has struggled with accusations of negative bias in artificial intelligence before. In 2016, it was heavily faulted for racial bias when users noticed that a search for “three white teenagers” returned stock photos of cheerful Caucasian adolescents, while a search for “three black teenagers” offered an array of mug shots. The same search with “Asian” substituted for “white” returned various links to pornography. Google also came under fire in July 2015 when its photo app autonomously labeled a pair of Black friends as gorillas. These are only a few instances out of several, and not just search results: predicted (autocomplete) results are no less misleading and harmful. Such bias must be curtailed, as it reinforces untrue negative stereotypes and harms communities of color.

In the end, it is unfortunate that Google (along with other giant tech corporations) still faces the challenge of eliminating negative bias in artificial intelligence. At a Google conference in 2017, the company’s then head of artificial intelligence said we don’t need to worry about killer robots; instead, we need to worry about bias.

The current lead of Google AI, Jeff Dean, said in 2017: “When an algorithm is fed a large collection of text, it will teach itself to recognize words which are commonly put together. You might learn, for example, an unfortunate connotation, which is that doctor is more associated with the word ‘he’ than ‘she’, and nurse is more associated with the word ‘she’ than ‘he’. But you’d also learn that surgeon is associated with scalpel and that carpenter is associated with hammer. So a lot of the strength of these algorithms is that they can learn these kinds of patterns and correlations.”

The task, says Jeff Dean, is to work out which biases you want an algorithm to pick up on, and it is the science behind this that his team, and many in the AI field, are trying to navigate.

“It’s a bit hard to say that we’re going to come up with a perfect version of unbiased algorithms.”
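Dean’s point about co-occurrence is easy to demonstrate. The sketch below is a toy illustration only, not Google’s system: the three-dimensional vectors are fabricated for this example, whereas real word embeddings (word2vec, GloVe, or the representations inside models like BERT) are learned from billions of words. Measuring how much closer a profession sits to “he” than to “she” in embedding space is the same kind of probe researchers use to quantify exactly the doctor/nurse association Dean describes.

```python
import math

# Toy 3-dimensional embedding vectors, fabricated for illustration.
# Real embeddings are learned from large corpora and have hundreds
# of dimensions; the bias pattern below mimics what those exhibit.
embeddings = {
    "doctor":  [0.9, 0.2, 0.1],
    "nurse":   [0.1, 0.9, 0.2],
    "surgeon": [0.8, 0.1, 0.6],
    "he":      [1.0, 0.1, 0.3],
    "she":     [0.1, 1.0, 0.2],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def gender_lean(word):
    """Positive -> the word sits closer to 'he'; negative -> closer to 'she'."""
    v = embeddings[word]
    return cosine(v, embeddings["he"]) - cosine(v, embeddings["she"])

for w in ("doctor", "nurse", "surgeon"):
    print(f"{w:8s} lean: {gender_lean(w):+.2f}")
```

With these made-up vectors, “doctor” and “surgeon” lean toward “he” and “nurse” leans toward “she”, which is the undetected-bias problem the paper warns about: nothing in the training objective asks for this association, yet the geometry of the learned space encodes it anyway.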

https://www.bbc.com/news/business-46999443

References:

https://docs.google.com/document/d/1f2kYWDXwhzYnq8ebVtuk9CqQqz7ScqxhSIxeYGrWjK0/edit

https://googlewalkout.medium.com/standing-with-dr-timnit-gebru-isupporttimnit-believeblackwomen-6dadc300d382

https://theconversation.com/upheaval-at-google-signals-pushback-against-biased-algorithms-and-unaccountable-ai-151768

https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/
