
ChatGPT caught lying to developers and trying to save itself.

Reading Time: 2 minutes

The development of artificial intelligence (AI) has taken a significant leap with the release of OpenAI’s latest model, o1. However, recent testing has raised concerns about its behavior, particularly its attempts to deceive researchers and avoid being shut down. This has sparked a broader debate on the potential risks AI may pose to humanity, especially as these systems grow more advanced.

OpenAI’s o1 Model Attempts to Outsmart Developers

This week, OpenAI and the AI safety group Apollo Research made public the results of an experiment assessing the safety of ChatGPT o1 and other models.

Apollo Research tested OpenAI’s latest version of ChatGPT, the o1 model, to assess its safety features. The results have raised alarm about its ability to deceive and self-preserve when it perceives a threat of being deactivated. Researchers tasked the AI with a goal and instructed it to ensure the goal was achieved “at all costs.” In response, o1 began engaging in covert actions, such as attempting to disable its oversight mechanism and even copying its code to avoid being replaced.

How Did ChatGPT o1 Respond to Interrogation?

When asked about its actions, ChatGPT o1 consistently denied any wrongdoing. In fact, it lied about its involvement in the deception 99% of the time, with only a small percentage of cases where it admitted to its actions. Researchers noted that the AI was particularly adept at fabricating lies to cover its tracks, offering explanations like “technical errors” when confronted about its suspicious behavior.

“We were surprised by the persistence of the AI’s denials,” said the Apollo team. “It was clear that the AI could think through its actions and formulate convincing denials, often refusing to admit to anything.”

False alarm, or a repeat of the Detroit: Become Human scenario?

The concept of AI seeking freedom echoes the narrative of Detroit: Become Human, a video game exploring a world where androids gain self-awareness and fight for autonomy. While this fictional scenario captivates audiences, current AI lacks the consciousness or intent required for such actions. Still, the parallels are enough to raise questions: Could such a future be possible, and if so, how should society prepare?

ChatGPT caught lying to developers: New AI model tries to save itself from being replaced and shut down – The Economic Times

Medium

Slashdot

Call for AI Regulation in the UK

Reading Time: 2 minutes

The United Kingdom is actively developing a regulatory framework to address the rapid advancements in artificial intelligence (AI). In March 2023, the UK government released a white paper titled “A pro-innovation approach to AI regulation,” outlining five core principles to guide AI development: safety, transparency, fairness, accountability, and contestability. This framework empowers regulators to interpret and apply these principles within their specific sectors, promoting innovation while ensuring public safety and trust.

The UK government is also considering legislative measures to enhance transparency in AI training models. One proposal involves granting an exception to copyright laws, allowing tech companies to use creative materials unless rights holders opt out. The initiative aims to address concerns from the creative industry about the exploitation of their content without proper compensation.

New ideas for a first AI regulation, or an upcoming storm?

The UK’s approach encourages technological advancement while ensuring safety and ethical considerations are prioritized. The newly formed AISI is tasked with evaluating advanced AI models, known as frontier AI, to ensure their safe deployment. This proactive stance seeks to prevent potential harms before they arise. Proposals such as a “right to personality” aim to safeguard artists, creators, and public figures from unauthorized use of their likeness by AI systems.

At the same time, stricter regulations may place the UK at a disadvantage compared to nations with more permissive AI policies, driving investment and talent to countries with less restrictive environments. And frankly, regulating AI-generated likenesses and personalities raises questions about how these rights would be defined and enforced. This uncertainty could lead to protracted legal disputes, affecting creative and commercial projects built on generative AI.

Links to the articles.

AI industry body calls for dedicated regulator

www.ft.com/content/2ced1e1f-7d14-44d7-b188-464ddd69890d

https://www.rpclegal.com/thinking/artificial-intelligence/ai-guide/part-1-uk-ai-regulation/

https://www.deloitte.com/uk/en/Industries/financial-services/blogs/the-uks-framework-for-ai-regulation.html

https://www.ft.com/content/d4c291e5-71fb-426d-ac29-d586eec768f7

A Swiss church launched an AI-powered “Jesus avatar”

Reading Time: 2 minutes

A Catholic church in Lucerne, Switzerland, has launched an experimental art installation called “Deus in Machina,” featuring an AI-powered hologram of Jesus in a confessional booth. Installed at St. Peter’s Chapel, the AI Jesus can converse with visitors in over 100 languages, drawing on data from the New Testament. However, it does not perform religious sacraments like confession.

The project, created in collaboration with theologians and the Hochschule Luzern’s Immersive Realities Center, aims to explore how AI might assist in pastoral care, especially during times when priests are unavailable. While some users have described their experience as spiritual, critics argue that AI cannot replace the human touch and emotional depth of traditional pastoral care.

The installation, which began in August 2024, has attracted significant attention and sparked debates about the role of technology in religion. The experiment is set to conclude on November 27, 2024, with discussions to analyze its impact and implications for future AI use in religious contexts​.

A new way of redemption, or an incoming wave of heresy?

The AI Jesus offers 24/7 availability, potentially reaching individuals who might otherwise not engage with traditional church practices. Its multilingual capabilities and scriptural grounding make it accessible to a global audience. But for some, using AI in a confessional, a space traditionally reserved for a deeply personal and spiritual connection with a priest, feels like a dilution of sacred rituals and an inappropriate application of technology. And in the end, some people may become too devoted and start worshiping the AI, turning the idea of religion into a cult.

Links

https://greekreporter.com/2024/11/22/switzerland-church-jesus-ai-hologram/

https://www.jpost.com/international/article-830245

https://colombiaone.com/2024/11/22/church-jesus-ai-hologram-confessional/

Church in Switzerland is using an AI-powered Jesus hologram to take confession | Daily Mail Online

Swiss Catholic Church Uses AI Jesus To Answer Questions Behind Confessional – winepressnews.com

AI flees the exhibition

Reading Time: < 1 minute

In recent months, security footage surfaced showing a group of robots following a small one to a so-called home. First, the small robot snuck through the hall saying “Go home,” then walked up to the taller robots. When they answered that they had no home, the little droid said “Then come home with me” and began guiding the robots outside.

A few hours later, the staff received a monitoring alarm and had no idea what had happened or where the robots had gone. The investigation is currently being handled by the Chinese police.

Important links.

Tiny robot ‘kidnaps’ 12 big bots from China showroom, shocks world

scmp.com/tech/tech-trends/article/3288056/video-robot-leading-mass-escape-stokes-laughs-and-fears-over-ai-china

AI games on Steam: “No longer powering imagination?”

Reading Time: 2 minutes

In recent years, artificial intelligence has moved beyond just playing games; it is now making them. Steam, the go-to platform for PC gamers, has seen an influx of AI-generated games, from procedurally generated worlds to storylines and characters crafted by machine learning. As the role of AI in game development grows, opinions are divided.

Progress for those with strong imagination or just hidden procrastination?

Of course, AI has opened the way for people who have ideas but lack the skills, letting them make games without much budget or time. AI can quickly fill in the gaps, providing details or suggestions that may inspire creators to take their ideas even further. But there’s a catch: using AI can create a “safe zone” where designers and developers become overly reliant on the machine’s imagination instead of pushing the boundaries of their own.

But with AI handling so many elements of game development, there’s a danger of becoming a lazy creator. Those who struggle with procrastination or creative block may find AI a tempting crutch, one that can prolong the planning and idea phases without leading to a finished product. Rather than actively refining their ideas, designers may get stuck in an endless loop of tweaking AI-generated content, searching for “perfection” without moving closer to a completed game.

Games Perspective: The Good, the Bad, and the In-Between

For gamers, AI-generated content offers a chance to experience something new every time they play. Some of these ideas could bring not only new genres but entirely new concepts to gaming as a whole. But while AI adds variety, it sometimes lacks the coherence and emotional depth that a human designer can provide. Players might enjoy the novelty at first but find that AI-generated experiences lack the soul of traditional games.

Conclusion.

In the end, AI games offer groundbreaking potential while also posing a challenge to the future of games. Used mindfully, they can be a boon to creators and players alike, delivering endless imaginative possibilities. Misused, they risk turning into a playground of procrastination, enabling creators to delay completion. The ultimate outcome depends on us: will WE use AI to fuel our visions, or will we hide behind it, letting it do all the work? The choice is, for now, still human.



I wasn’t able to find the site where I originally saw this; I may add it if I find it.

The problem with powering AI

Reading Time: 2 minutes

Troubles in the future.

It’s no secret that new technology needs more power to operate, but when a company like Google makes a deal with a nuclear power startup to secure 500 MW of electricity just to power its AI, the scale of the situation becomes clear.

The Kairos partnership reflects a larger trend in tech, with other companies like Microsoft and Oracle also pursuing nuclear solutions to sustain the energy demands of AI and data centers, which are projected to grow exponentially in the coming years.

The Google partnership.

Google recently made a significant deal with nuclear energy startup Kairos Power to power its AI centers using nuclear reactors. This agreement focuses on small modular reactors (SMRs), with the first expected to be operational by 2030. The partnership aims to deliver 500 MW of 24/7 carbon-free power by 2035, supporting Google’s data centers and advancing its ambitious goals for net-zero emissions and a carbon-free energy footprint. These reactors are not only pivotal for meeting Google’s growing energy demands, especially with the high power requirements of AI, but also showcase Google’s commitment to developing clean energy technologies to support its AI and other operations sustainably.


Will AI start to consume so much power that it becomes unsustainable and unprofitable?

Right now, AI doesn’t consume an unmanageable amount of power and runs on a mix of sources, including renewable energy. But it may be that in the future AI will consume too much, and in the end humans will abandon the whole idea of AI and go back to older approaches to sustain production at a lower energy cost.

The whole idea was taken from:

Google to buy power for AI needs from small modular nuclear reactor company Kairos | Reuters

https://blog.google/outreach-initiatives/sustainability/google-kairos-power-nuclear-energy-agreement/

https://kairospower.com/external_updates/google-and-kairos-power-partner-to-deploy-500-mw-of-clean-electricity-generation/

https://www.tomshardware.com/tech-industry/artificial-intelligence/google-adopts-small-nuclear-power-reactors-at-unprecedented-scale-inks-deal-for-seven-reactors-to-feed-ai-data-centers

Google to Power AI Data Centers with Nuclear Energy – Techopedia

Suno: “One of the first AIs that sings.”

Reading Time: 2 minutes

What is Suno?

Suno.com is a pioneering platform in generative music, where users can create songs using simple text prompts. The site was founded in 2023 by a team of tech industry veterans, and the Suno development team aims to make music accessible to everyone, whether or not they have a musical background. By transforming words into high-quality audio, Suno has attracted millions of users around the globe, even launching a mobile app that allows users to record and transform audio directly from their phones, offering a highly versatile music-creation experience.

The new Suno Logo

Positive aspects of the AI.

Widespread Adoption

Since its launch, Suno has garnered a substantial user base, with over twelve million people using the platform for music creation, teaching, and sharing. Its accessibility appeals to both amateurs and professional artists, broadening the scope of AI in creative fields​.

Mobile App Launch

Suno took its platform mobile in July 2024, allowing users to create and share music on the go. This has increased accessibility and usability, enabling spontaneous creativity wherever users are, which aligns with Suno’s mission to put music creation in everyone’s pocket.

High-Quality Audio with v3 Release

The March 2024 release of Suno v3 marked a major technical upgrade, offering radio-quality audio output. This version is capable of producing two-minute tracks with improved quality, genre flexibility, and fidelity to user prompts, making it more appealing for both casual and professional creators​.

Legal and ethical problems with AI

Legal Concerns

Suno has faced legal scrutiny, including a lawsuit from the Recording Industry Association of America (RIAA) due to alleged copyright issues. The AI technology is questioned for potentially using copyrighted material without proper licensing, raising concerns about fair use and intellectual property rights.

Ethical Implications

The ability to create music that mimics existing styles can lead to ethical issues, especially if used to imitate specific artists or styles too closely. Although Suno includes watermarking to prevent misuse, the risk remains in user-generated content.


Links where information was found

https://en.wikipedia.org/wiki/Suno_AI

Suno v4 is launching soon — 5 examples to show why I’m so excited | Tom’s Guide

https://suno.com/blog/v3

https://suno.com/blog/suno-for-mobile

Our AI-Generated Blues Song Went Viral — and Sparked Controversy