1. OpenAI CEO Sam Altman was fired by the company's board on Friday. But after a massive protest by most of the startup's employees, Altman was reinstated on Tuesday, and most of the board that dismissed him was replaced. Over five tumultuous days, OpenAI exposed the weaknesses of its self-regulation, which worries those who see AI as an existential threat and support AI regulation. Altman has long warned the media and governments around the world about the existential risk posed by the technology his company is developing, and he presented OpenAI's unusual structure as an antidote to the irresponsible development of powerful artificial intelligence. But when the board tried to remove Mr. Altman, it turned out that, in practice, it could not. Toby Ord, an existential-risk researcher quoted in the article, suggests that this governance failure is itself alarming for those worried about an existential threat to humanity. After the chaotic change of leadership, the board was reorganized to include former US Treasury Secretary Larry Summers, while the two directors associated with the "Effective Altruism" movement, who were also the board's only women, were removed. The episode highlighted existing disagreements about how the future of artificial intelligence should be governed.
2.
In Sweden, postal workers have begun halting deliveries to Tesla offices and workshops in support of a strike over the electric car maker's refusal to sign a collective agreement. In response, Tesla CEO Elon Musk called the situation "insane" and said it could hamper the supply of new cars. The Swedish business newspaper Dagens Industri later reported that the blockade could make it virtually impossible for new cars to get on the road, since only the Swedish postal service Postnord delivers number plates for new cars. The strike began on October 27, when 130 mechanics at 10 Tesla workshops in seven Swedish cities walked off the job.
3.
According to a report by the UK Safer Internet Centre (UKSIC), children are using artificial intelligence image generators to create inappropriate images of other children. The charity has received a "small number of reports" from schools and is calling for action before the problem escalates. It should be noted that creating, possessing or sharing such images is illegal under UK law, regardless of whether they are real or AI-generated. UKSIC calls on teachers and parents to work together to help children understand that what they are producing is considered child abuse material. The charity warns that young people can lose control of such material and share it online without realizing the consequences, and that the images can also be used for blackmail. A recent study by RM Technology found that almost a third of the thousand students surveyed had used artificial intelligence to look at inappropriate content. The study also revealed that students' AI knowledge is often more advanced than that of most teachers, creating a knowledge gap. UKSIC wants to see a partnership in which schools work with parents to help bridge this gap and prevent inappropriate use of AI. David Wright, head of UKSIC, said: "We need to act now before schools are overwhelmed and the problem grows."
4.
According to Coinbase CEO Brian Armstrong, the recent settlement between Binance, its CEO Changpeng Zhao and US authorities gives the cryptocurrency industry an opportunity to turn the page and clarify legal areas that still need clarity. Under the settlement, Binance was fined $4 billion by the US Department of Justice, and its founder and CEO Changpeng Zhao stepped down and pleaded guilty to violating money laundering laws. The government accused Binance of violating the US Bank Secrecy Act and sanctions against Iran. Armstrong dismissed the suggestion that cryptocurrencies are primarily used for nefarious purposes such as fraud, money laundering and terrorist financing, a common refrain from financial firms that shy away from entering the space over compliance concerns. In his view, while cryptocurrencies have seen some illegal activity, that activity is not representative of the industry as a whole.
5.
Microsoft has announced that it will deploy its own processors in its data centers, including AI-optimized chips refined with feedback from key AI partner OpenAI. According to the article, one of the chips developed by Microsoft, the Azure Maia AI Accelerator, is optimized for AI tasks, including generative AI. The other, the Azure Cobalt CPU, is an Arm-based processor designed for general cloud workloads. Microsoft says the processors will be rolled out to its data centers early next year. The company initially plans to use them to power its own services, including Microsoft Copilot and the Azure OpenAI Service, before expanding their use to other workloads. The article also notes that Amazon started making its own data center chips eight years ago and now offers dedicated AI chips, Trainium and Inferentia, for building and running large models. Google has likewise developed several generations of its Tensor Processing Unit chips, which Google Cloud uses for machine learning tasks, and is reported to be working on its own Arm-based processors as well.
Links:
https://www.wired.com/story/sam-altman-second-coming-sparks-new-fears-ai-apocalypse/
https://www.theverge.com/2023/11/27/23977923/tesla-sweden-lawsuit-postal-workers-license-plates
https://www.bbc.com/news/technology-67521226
https://www.cnbc.com/2023/11/27/coinbase-ceo-crypto-industry-can-turn-page-after-binance-settlement.html
https://www.geekwire.com/2023/inside-the-ai-chip-race-how-a-pivotal-happy-hour-changed-amazons-strategy-in-the-cloud/
AI: news item number 4 was generated with ChatGPT 3.5
Technical news