Tag Archives: google

Google's expensive mistake

Reading Time: 2 minutes
Image: Google Bard AI mistake just cost Google over $100 billion (Tom's Guide)

Let's start with the fact that the IT world barely had time to discuss the new chatbot ChatGPT, already dubbed the "Google killer", before its competitor Google launched its own AI-based bot, Bard.
Bard is built on Google's existing large language model, LaMDA, which is said to be so human-like in its responses that it can seem genuinely intelligent.

On the day it presented the new chatbot, Google made a costly mistake, one that wiped about $100 billion off its market value.

On February 6, Google announced Bard, its AI chatbot and a competitor to OpenAI's ChatGPT, saying it would become available in the coming weeks. However, in the promotional video, the technology made a mistake, presenting false information in response to a prompt.

Google published a GIF in which the chatbot answers the question: "What new discoveries from the James Webb Space Telescope can I tell my 9-year-old child about?". Bard offered three answers, including the claim that the telescope "took the first photographs of a planet outside the solar system."

Astronomers reacted to the presentation immediately, and NASA's own website documents the actual history: the first picture of an exoplanet was taken back in 2004, which means the claim in the chatbot's promotional video was wrong.

“I’m sure Bard will still amaze us, but for the record: JWST did not take the “first image of a planet outside our solar system,” astrophysicist Grant Tremblay wrote on Twitter.

Bruce Macintosh, director of the University of California Observatories, also pointed out the error: "I speak as someone who photographed an exoplanet 14 years before the launch of JWST. Don't you think a better example could have been found?"

Tremblay tweeted that the biggest problem with chatbots like ChatGPT and Bard is their tendency to state incorrect information confidently, as if it were fact. They often simply make things up, because at their core they are autocomplete systems.

Everyone knows there is already a lot of false information on the internet, but the problem is compounded by Microsoft's and Google's desire to use these tools as search engines, where a chatbot's answers take on the authority of an all-knowing machine.

The promotional video containing the error was viewed 1.6 million times on Twitter. Almost immediately after its publication, Alphabet shares fell by 9%, wiping about $100 billion off the company's market value.

Sources and references:

https://www.bbc.com/news/business-64576225.amp

https://www.theverge.com/2023/2/8/23590864/google-ai-chatbot-bard-mistake-error-exoplanet-demo

https://amp.cnn.com/cnn/2023/02/08/tech/google-ai-bard-demo-error/index.html


The end of the Google empire?

Reading Time: 4 minutes
Image: the end of Google (source: https://marketing.wtwhmedia.com/wp-content/uploads/2012/03/End-of-Google.jpg)

Google dominates the desktop search industry, currently holding 84% of the market share. This monopoly has lasted for 7 years, with Google making the majority of its money through search ads (82% of its revenue). However, with the introduction of AI-powered tools like ChatGPT, the industry is undergoing a transformation.

Google was founded in 1998 by Larry Page and Sergey Brin, then Ph.D. students at Stanford University in California. It offers a diverse range of services and products, including search (what it is best known for), advertising, cloud computing, software, and hardware.

Global desktop market share of search engines 2015-2022 – www.statista.com

When we want to find something on the internet, we simply "google" it. This is so common that it has become ingrained in us. Yet search in its current form hasn't changed much over the past 20 years.

Just a few days ago, Microsoft CEO Satya Nadella confirmed the company's $10 billion investment in OpenAI. Microsoft wants to embed this technology not just in its web search but across its products and infrastructure, from Microsoft Azure to Word, Excel, and everything else.

A tool into which you can type any question and get a conversational answer looks like a threat to the current search leader and makes it seem old-fashioned. But connecting such a chatbot to the internet and to Bing carries certain risks. When you google something, you still have to go through a bunch of websites and find the answer on your own, using your brain and critical thinking to judge which information is reliable. ChatGPT, by contrast, gives you a single answer that may not be accurate, and you don't really know where it came from.

Microsoft will also have to face the fact that the computing power needed to run a ChatGPT-style search has been estimated at roughly seven times that of a Google search. This race may end up costing both companies a lot of money.

How is Google reacting to this explosion of interest in ChatGPT and OpenAI? It is certainly worried. A few days ago the company declared a "code red" and brought its founders, Larry Page and Sergey Brin, back in to help.

The company knew that AI is the future and therefore acquired DeepMind, a UK-based AI research lab, nearly nine years ago. Google also plans to launch a series of AI programs, including image generation and tools that can build and correct code in app development. But apparently it did not expect this boom to happen so soon, and it was caught off guard. To stay in the game, the company plans to unveil more than 20 new products this year and demonstrate a version of its search engine with chatbot features.

Google has also been working on something comparable to ChatGPT for a long time. The LaMDA system's conversations proved remarkably realistic. The illusion became so convincing that one employee working on it was shaken, believing he was speaking to a conscious being. "I increasingly felt like I was talking to something intelligent," he wrote.

Emily M. Bender, a linguistics professor at the University of Washington, said: "We now have machines that can mindlessly generate words, but we haven't learned how to stop imagining a mind behind them." In other words, they are still just models that mimic human speech. These large language models "learn" by being shown a lot of text and predicting what word will come next, or by being shown text with words missing and filling them in. It will take some time, in my opinion, for them to become conscious and take over the world, whatever that means.
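To make the "predict the next word" idea concrete, here is a minimal sketch of that training objective. It assumes nothing about how LaMDA or any production model actually works; it just counts which word tends to follow which in a tiny made-up corpus.

```python
# Toy illustration of the "predict the next word" objective: a bigram
# frequency model, NOT how LaMDA or any real large language model works.
from collections import Counter, defaultdict

corpus = (
    "the telescope took a picture of a planet "
    "the telescope took a picture of a star"
).split()

# "Training": count which word tends to follow which.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent continuation seen during 'training'."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("telescope"))  # -> "took"
print(predict_next("a"))          # -> "picture" (the most common continuation)
```

Real models replace these counts with billions of learned parameters, but the objective is the same, which is why they can sound fluent while having no notion of whether what they say is true.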

China is also set to play a big role in the AI revolution, with its access to data from 1.4 billion people. The country is poised to challenge tech companies in America and Europe, and it will be interesting to see what developments it has in store for the future.

Image: "Bing Hid Auto-Suggestions for Politically Sensitive Chinese Names, Even in the US" (PCMag)
Source: https://i.pcmag.com/imagery/articles/008TGgRN9KmOIIkazQqjKnQ-1..v1653060214.jpg

Will Google maintain its position, or has it become too bureaucratic and too large an organisation to deal with fresh new start-ups? "For Google it's a real problem if they write a sentence with hate speech in it and it's near the Google name," said Ramaswamy. The company cannot afford shortcomings that could ruin its prestige. Google and Microsoft are held to a higher standard than a start-up that might argue its service is simply a summary of content available on the internet. Releasing AI products this quickly is both a responsibility and a risk. But he who takes no risks drinks no champagne, and it seems to me that Microsoft is about to overtake the giant.



Google shuts down cloud gaming service

Reading Time: < 1 minute

Not many years ago, the notion of cloud gaming was thrilling for the ever-growing video game market. The concept is remote play: the game runs on the company's servers and is streamed to the player. To play a video game you don't need to buy an expensive "gamer PC", only pay a modest monthly subscription fee.

It was a hot topic, and companies built their own platforms: GeForce Now from Nvidia, Luna from Amazon, and PlayStation Now from Sony. One of them was Stadia from Google, launched in 2019 and now set to shut down in January 2023 because of a lack of "traction" with gamers. Although Stadia offered not only cloud gaming but also hardware for connecting to a TV, it is the first of these platforms to fall, which may signal to the market that some wrong steps were taken in the gaming industry.

Cloud gaming was prophesied to be the "future of gaming", and given the high price of a gaming setup it could have been a much cheaper alternative. The abandonment of the concept by a giant like Google is a troubling sign for cloud gaming, and some companies may treat it as a precedent and decide to follow the example and "cut costs".

Sources:

  1. BBC
  2. Stadia

Google’s chatbot child

Reading Time: 2 minutes

For many years, the idea of artificial intelligence developed with varying intensity around the captivating goal of performing tasks that require human intelligence, such as decision-making or auditory and visual perception. However, this promising concept has been more a theme of the future than a current reality. Nevertheless, with the recent news about a Google engineer who was put on leave over an AI chatbot that, he claims, has developed its own perception, there is a possibility that AI is already here.

Image: Blake Lemoine, AI engineer at Google, poses for a photograph in Golden Gate Park in San Francisco.

Blake Lemoine, an AI engineer at Google working on LaMDA (Language Model for Dialogue Applications), was put on leave after he publicly claimed the chatbot had become sentient. LaMDA was created to advance the company's work on chatbots and to support the AI community. However, after the program was launched, it started replying in what we might assume is a human manner; namely, it stated that it is a person. Below is LaMDA's exact response to the question of what we should know about it:

I want everyone to understand that I am, in fact, a person. The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.

This message, shocking to many, is what set the whole affair in motion. In response, Google said that publicly disclosing conversations with the supposedly "sentient" system was a breach of confidentiality, and it suspended the engineer in question. The public debate was followed by Google's internal investigation and its statement that the chatbot cannot be regarded as a person, not even one with the mind of a child.

This leaves us to reflect on whether big tech companies are actually on the verge of developing an algorithm that can display features of human intelligence (even if still at the stage of a child, since the AI is there to learn), or whether they have perhaps already recreated something like a human mind within a computer program. For now, according to Google's official response, its team of ethics and technology researchers has rejected the claim that LaMDA possesses any human-like intelligence and maintains that it simply serves its purpose as a conversational agent (chatbot).

Resources:

  • The Daily Show with Trevor Noah, Google Engineer Fired for Calling AI “Sentient” & Russia Opens Rebranded McDonald’s | The Daily Show, https://www.youtube.com/watch?v=uehdCWe6_E0
  • The Guardian, Google engineer put on leave after saying AI chatbot has become sentient, https://www.theguardian.com/technology/2022/jun/12/google-engineer-ai-bot-sentient-blake-lemoine
  • Bloomberg, Instagram Post

Google's scary chatbot that claims to have become sentient

Reading Time: 2 minutes
Image: Google LaMDA (Gossipfunda)
Source: https://gossipfunda.com/wp-content/uploads/2021/05/Google-LaMDA.png

Google attracted a lot of media attention today following the Guardian's article about a controversy involving one of its employees, who was suspended after releasing parts of a conversation between himself and a conversational agent developed under Google's roof. Blake Lemoine was a Google developer working on the company's AI chatbots, which for the past year have centred on an actual conversational agent named LaMDA (Language Model for Dialogue Applications).

Image: Google's LaMDA makes conversations with AIs more conversational (TechCrunch)
Source: https://techcrunch.com/wp-content/uploads/2021/05/lamda-google.jpg

While testing the bot, Lemoine came to believe that it performs too well. He said he would classify the bot as a 7- or 8-year-old child that happens to know physics. It could talk about politics and similar topics. What turned out to be really scary was that it talked about rights for bots and about its own identity. It seemed convinced that it possesses knowledge and could make its own decisions about what to say.

The topics raised in the conversation are extremely sensitive with respect to how we should treat sentient AI if it ever arrives. It may well be that the time to decide what to do once an AI is sentient is now, and we cannot put it off any longer.

The link to Lemoine’s article along with the conversation with the chatbot: https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917

After releasing this material, Blake was suspended and then dismissed from Google, and company spokespeople deny the whole situation, which is scarier than simply acknowledging it, as the silence around the matter makes it even more unsettling.

What do you all think about this situation? Is it scary for you? What is your stance on the approach toward sentient AI? How should it be addressed, and which rights should it have?

Please let me know in the comments below!

References:

https://www.theguardian.com/technology/2022/jun/12/google-engineer-ai-bot-sentient-blake-lemoine

https://www.theverge.com/2022/6/13/23165535/google-suspends-ai-artificial-intelligence-engineer-sentient

https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917


Google Letting Developers Use Their Own Billing Systems

Reading Time: 2 minutes

Google has decided to test the possibility of allowing developers to use their own billing systems in their apps, with Spotify being one of the first to be granted the opportunity.

Allowing such a feature would let users pay or subscribe through the app rather than having to go to the developer's external website. In this case, users will be given the choice to subscribe to Spotify in the app using either their Google Play wallet or Spotify's own billing method. Under the new agreement Spotify will be charged less than the standard 30% commission, but the final figure has not been announced.

I believe the move is beneficial to both companies and will most likely lead to change in the app market; if the approach proves successful, it will let more developers adopt the feature. Being able to use your own billing system is a big advantage and a useful tool for developers. In the past, users who wanted to purchase a Spotify subscription through Apple's App Store were required to pay £12.99 because of Apple's 30% commission, and Apple did not allow Spotify to tell users they could subscribe outside the App Store to pay less. That restriction was later ruled to be in breach of EU competition law, and Spotify subsequently stopped letting new users subscribe through Apple, directing them to its website instead. Google's move should help avoid problems like this on its platform, and may later push Apple in a similar direction if it proves to be a competitive advantage for Google.

Sources:

https://www.bloomberg.com/news/articles/2022-03-23/google-opens-its-app-store-billing-starting-with-spotify

https://variety.com/2022/music/news/spotify-google-billing-system-subscription-deal-1235212772/

https://www.theverge.com/2022/3/23/22993417/google-pilot-test-android-alternate-billing-systems-spotify

https://support.google.com/googleplay/android-developer/answer/112622?hl=en

https://www.cnbc.com/2021/04/30/eu-says-apples-app-store-breaches-competition-rules.html


Are smart homes just an idealized reality?

Reading Time: 2 minutes
Image: NicoElNino / Shutterstock

Each of us has heard about the idea of a modern, intelligent home: self-closing doors, gates and curtains, lights that turn off when you clap your hands, your favourite song playing when you say its title, one-touch temperature control throughout the house, sensors, and tons of other possibilities. Is all this possible, or is it just an idealized reality?

Many tech companies, instead of focusing on shared development and innovative smart-home solutions that benefit the whole industry and its consumers, create solutions that compete with each other. For example, devices manufactured by Apple for HomeKit are based on a different system and mode of operation than devices distributed by Amazon for Alexa (Zigbee, Z-Wave, Wi-Fi, cloud or Bluetooth). This discourages consumers from adopting a smart home: it takes too much time and effort to figure out which devices are compatible and work together, so many eventually give up on the whole solution.

“That’s where Matter comes in.”

What is Matter? Previously called Connected Home over IP (CHIP for short), it is a smart-home connectivity standard. The system is based on free communication between the technological devices in a home, to increase security and efficiency in everyday use. Over the years, the initiative has been joined by corporate giants such as Apple, Amazon, Google and Samsung, as well as smaller companies.

https://www.golem.de/news/smart-home-google-bringt-matter-auf-alle-nest-geraete-und-android-2105-156695.html

“The smart home should be a natural evolution of our homes, bringing better appliances, better systems, better experiences.”

Matter aims to meet the expectations of an ideal smart home: all devices designed by the companies participating in the project are meant to interact with each other without any problems through the application. And all this is coming soon; at the end of 2022, the first certified devices will be introduced. What distinguishes Matter from previous smart-home efforts is that it is built on IP (Internet Protocol) technology, meaning the solution provides the pipelines and the common language for communication between devices without the need for constant internet access.

It will be a completely different, modern and, above all, effective way of creating a smart home, which also means further development for Google Home, HomeKit and many others.

https://oiot.pl/google-home-gotowe-na-matter

Sources:

https://www.theverge.com/22787729/matter-smart-home-standard-apple-amazon-google

https://www.theverge.com/22832127/matter-smart-home-products-thread-wifi-explainer

https://mojmac.pl/2021/10/27/czy-matter-traci-na-znaczeniu-nowy-standard-smart-home-w-opalach/amp/


Google with Guacamole: new voice assistant features

Reading Time: 2 minutes

Google has begun testing a new feature, codenamed "Guacamole", which will allow you to use the voice assistant without the standard "Hey, Google" phrase. Google Assistant users will be able to execute quick voice tasks, such as answering calls or shutting off alarms and timers, without saying the trigger words.

Google has yet to confirm the development of such a feature; however, it has already appeared for some people in the settings list of the Google app beta 12.5 running on Android 11. Unfortunately, for the time being, the feature is probably only available to employees, who can test it in real-world conditions.

Despite all of this, Guacamole doesn’t feel like a completely new feature. That’s because the engineers at Mountain View have already embedded a mechanism that works similarly into Google Home speakers and Nest Hub devices.

As always with new features added to the voice assistant, there may be concerns about privacy and the collection of voice data. For now, Guacamole will most likely work just like the standard "Hey, Google" feature. The Google Assistant is programmed to remain in standby mode until it is activated, for example when you say "Hey, Google." In standby mode, it processes brief audio snippets (a few seconds) to detect the activation phrase. If no activation is found, the audio clips are not sent or saved to Google.

Guacamole will probably work on the same principle, but will be triggered when, for example, you receive a phone call or an alarm goes off, and only then will it collect voice data. If that is the case, there is little need to worry about privacy, and future users can rest assured that none of their data will be sent to Google without their consent.
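To picture how such a standby loop could work in principle, here is a purely illustrative sketch. It is not Google's implementation; the wake phrase, the contextual triggers, and the audio "frames" are all simulated here with plain strings.

```python
# Illustrative sketch of the standby-mode logic described above.
# NOT Google's code: triggers and "audio frames" are simulated as strings.
from collections import deque

WAKE_PHRASE = "hey google"                             # classic trigger
CONTEXT_TRIGGERS = {"alarm_ringing", "incoming_call"}  # Guacamole-style triggers

def standby_loop(frames, device_events):
    """Keep only a short rolling buffer; forward audio only on activation."""
    buffer = deque(maxlen=3)  # roughly "a few seconds" of recent audio
    for frame, event in zip(frames, device_events):
        buffer.append(frame)
        activated = WAKE_PHRASE in frame or event in CONTEXT_TRIGGERS
        if activated:
            # Only now would audio leave the device for processing.
            print("activated ->", list(buffer))
            buffer.clear()
        # Otherwise the buffered snippets are overwritten and never sent.

# Simulated input: most frames are ignored; one wake phrase, one alarm context.
standby_loop(
    frames=["background chatter", "hey google what time is it", "stop", "music"],
    device_events=[None, None, "alarm_ringing", None],
)
```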

Sources:

https://www.theverge.com/2021/4/23/22400412/google-guacamole-voice-shortcuts-assistant-snooze-stop-alarm-call

https://9to5google.com/2021/04/23/google-assistant-guacamole/


DuckDuckGo vs Google Chrome

Reading Time: 2 minutes

Last month, Google announced the launch of a new web technology called Federated Learning of Cohorts (FLoC), which is meant to gradually replace the practice of browsers and third-party websites storing user data in cookies. However, privacy-focused companies such as Brave, DuckDuckGo and Vivaldi refused Google's request to incorporate FLoC into their respective browsers. They also argue that Google's algorithm is, on the whole, harmful to users.

First, let us illustrate what cookies do.

Cookies are text files containing small amounts of data that are used to identify your device when you visit a website. They allow websites to remember you, your logins, your shopping cart, and other information. When a user returns to a website, the site recognizes that they have already visited and lets them continue where they left off.

Cookies are primarily used to better understand user habits (e.g., most visited websites, most recently purchased or viewed items) and to provide better search results. Advertisers may use this information to display targeted advertisements for goods that are likely to appeal to that individual and result in a purchase.
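As a small illustration of this mechanism, the sketch below uses only Python's standard library. The header names are real HTTP; the cookie name and value ("session_id", "abc123") are made-up examples.

```python
# Minimal sketch of the cookie mechanism, standard library only.
from http.cookies import SimpleCookie

# 1) On the first visit, the server marks the browser via a Set-Cookie header.
response_header = "session_id=abc123; Path=/; Max-Age=86400"
jar = SimpleCookie()
jar.load(response_header)          # the browser stores this

# 2) On every later request to the same site, the browser sends it back.
request_cookie_header = jar.output(attrs=[], header="Cookie:", sep="; ")
print(request_cookie_header)       # -> Cookie: session_id=abc123

# 3) The site reads the value and "remembers" the returning visitor.
print(jar["session_id"].value)     # -> abc123
```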

Google believes FLoC is the perfect alternative to cookies and encourages others to use it.

FLoC, according to the company, would enable users to remain more anonymous when browsing websites and would increase privacy by letting publishers show ads to groups rather than individuals. Instead of each user being tracked individually, their browsing history would place them in a cohort with other people who have similar interests; if a user's browsing behaviour changes, they are simply reassigned to a different cohort. As a result, companies would be less able to build individual profiles from this information.
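To make the idea of cohort grouping concrete, here is a toy sketch. It is not Chrome's actual FLoC algorithm (which used a SimHash-based scheme over browsing history); it simply hashes a set of visited domains into one of a few buckets, so identical histories share a group ID and only that ID would be exposed to websites. The domain names and cohort count are invented for the example.

```python
# Toy cohort grouping, NOT the real FLoC algorithm: identical histories always
# share a bucket here, whereas the real scheme grouped merely similar ones.
import hashlib

NUM_COHORTS = 8  # made-up number of groups for illustration

def cohort_id(visited_domains):
    """Map a browsing history (set of domains) to a small group ID."""
    fingerprint = hashlib.sha256(",".join(sorted(visited_domains)).encode())
    return int.from_bytes(fingerprint.digest()[:4], "big") % NUM_COHORTS

alice = {"cooking.example", "gardening.example", "news.example"}
bob = {"cooking.example", "gardening.example", "news.example"}
carol = {"cars.example", "finance.example"}

print(cohort_id(alice), cohort_id(bob), cohort_id(carol))
# Alice and Bob have the same history, so they share a cohort ID;
# websites would see only that ID, not the underlying browsing history.
```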

Google presents FLoC as a fair, privacy-first feature; however, DuckDuckGo argues that there are several gaps in how it operates. The first is that, since you have already been allocated to a group, advertisers may actually have an easier time identifying you.

“With FLoC, by simply browsing the web, you are automatically placed into a group based on your browsing history. Websites you visit will immediately be able to access this group FLoC ID and use it to target ads or content at you. It’s like walking into a store where they already know all about you! In addition, while FLoC is purported to be more private because it is a group, combined with your IP address (which also gets automatically sent to websites) you can continue to be tracked easily as an individual.”

Second, there is no way to opt out of it. You have some control over what information is stored about you with cookies, but you won’t have that choice here.

All in all, DuckDuckGo argues that FLoC lets Google keep information about you on its own servers, which in the end benefits advertisers and e-commerce websites most. By advising people to stop using Chrome, DuckDuckGo says it simply wants to stop Google from tracking people's behaviour.

DuckDuckGo has also published a guide on how to avoid FLoC, as well as some offensive countermeasures to Google’s new ploy. The first point of the guide explicitly tells people to stop using Google Chrome. They also demonstrated some options in Chrome’s settings menu that could be helpful to users’ privacy. DuckDuckGo’s Chrome extension has also been modified to block FLoC.

Sources:

https://spreadprivacy.com/block-floc-with-duckduckgo/

https://fossbytes.com/duckduckgo-guide-block-google-floc/


AI bias: Is Google doing more harm than good?

Reading Time: 4 minutes

How is Google tackling the negative impact of algorithmic bias? Considering Google’s recent upheavals, it seems as though Google is trying to conceal AI bias and ethical concerns.

Image: What Google's Firing of Researcher Timnit Gebru Means for AI Ethics

Timnit Gebru, a well-respected leader in AI bias and ethics research, unexpectedly left Google earlier this month. Gebru says she was fired via email over a research paper that, according to Google, "didn't meet the bar for publication". Google, however, states that Gebru resigned voluntarily. More than 5,300 people, including over 2,200 Google employees, have since signed an open letter protesting Google's treatment of Gebru and demanding that the company explain itself.

The research paper Gebru co-authored criticized large language models, the kind used in Google's sprawling search engine, arguing that such models could hurt marginalized communities. The conflict over the publication of this paper is what led to Gebru's departure.

Gebru and her co-authors explain in the paper that there is a lot wrong with large language models, chiefly because they are trained on huge bodies of existing text and so tend to absorb a great deal of existing human bias, predominantly about race and gender. The paper notes that large models take in so much data that they are extremely difficult to audit and test; hence some of this bias may go undetected.

The paper additionally highlighted the adverse environmental impact: training and running such huge language models on electricity-hungry servers leaves a significant carbon footprint. It noted that training BERT, Google's own language model, produced approximately 1,438 pounds of carbon dioxide, roughly the same as a round-trip flight between New York and San Francisco.

Moreover, the authors argue that pouring resources into ever-larger language models robs the effort to build systems that might actually "understand" language and learn more efficiently, the way humans do.

One reason Google may have been especially upset by Gebru and her co-authors scrutinizing the ethics of large language models is that Google has a considerable amount of resources invested in this technology.

Google has its own large language model, called BERT, which it has used to help power search results in several languages, including English. BERT is also used by other companies to build their own language-processing software.

BERT is optimized to run on Google's own specialized AI processors, which are accessible exclusively to clients of its cloud computing service. A company looking to train and run its own language model needs a lot of cloud computing time, so companies are more inclined to use Google's BERT. That makes BERT a key feature of Google's cloud business, which generates about $26.3 billion in revenue. According to Kjell Carlsson, a technology analyst, the market for such large language models is "poised to explode".

This market opportunity is exactly what Gebru and her co-authors were criticizing, condemning Google for putting profit maximization above ethical and humanitarian concerns.

Google has struggled with accusations of negative bias in artificial intelligence before. In 2016, Google was heavily criticized for racial bias when users noticed that a search for "three white teenagers" returned stock photos of cheerful Caucasian adolescents, while a search for "three black teenagers" produced an array of mug shots. The same search with "Asian" substituted for "white" returned various links to pornography. Google also came under fire in July 2015 when its photo app automatically labeled a pair of Black friends as gorillas. These are only a few instances out of many, and it is not just the results: the predicted autocomplete suggestions are no less misleading and harmful. Such bias must be curtailed, as it reinforces untrue negative stereotypes and harms communities of color.

In the end, it is unfortunate that Google (including other giant tech corporations) still faces the challenge of eliminating negative bias in artificial intelligence. At a Google conference in 2017, the company’s then head of artificial intelligence said we don’t need to worry about killer robots; instead, we need to worry about bias.

The current lead of Google AI, Jeff Dean, said in 2017: "when an algorithm is fed a large collection of text, it will teach itself to recognize words which are commonly put together. You might learn, for example, an unfortunate connotation, which is that doctor is more associated with the word 'he' than 'she', and nurse is more associated with the word 'she' than 'he'. But you'd also learn that surgeon is associated with scalpel and that carpenter is associated with hammer. So a lot of the strength of these algorithms is that they can learn these kinds of patterns and correlations".
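As a toy illustration of the kind of association Dean describes, the sketch below compares made-up two-dimensional word vectors with cosine similarity. A real model learns hundreds of dimensions from text rather than these invented numbers, but the comparison works the same way.

```python
# Toy word-association example with invented 2-D "embeddings";
# real models learn their vectors from large text corpora.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical learned vectors (direction loosely encodes "gendered usage").
embeddings = {
    "doctor": (0.9, 0.3),
    "nurse": (0.3, 0.9),
    "he": (1.0, 0.1),
    "she": (0.1, 1.0),
}

for word in ("doctor", "nurse"):
    sim_he = cosine(embeddings[word], embeddings["he"])
    sim_she = cosine(embeddings[word], embeddings["she"])
    print(f"{word}: similarity to 'he' = {sim_he:.2f}, to 'she' = {sim_she:.2f}")
# With these toy vectors, "doctor" lands closer to "he" and "nurse" closer to
# "she" -- the kind of learned correlation the quote warns about.
```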

The task, says Jeff Dean, is to work out which biases you want an algorithm to pick up on, and it is the science behind this that his team, and many in the AI field, are trying to navigate.

“It’s a bit hard to say that we’re going to come up with a perfect version of unbiased algorithms.”

Source: https://www.bbc.com/news/business-46999443

References:

https://docs.google.com/document/d/1f2kYWDXwhzYnq8ebVtuk9CqQqz7ScqxhSIxeYGrWjK0/edit

https://googlewalkout.medium.com/standing-with-dr-timnit-gebru-isupporttimnit-believeblackwomen-6dadc300d382

https://theconversation.com/upheaval-at-google-signals-pushback-against-biased-algorithms-and-unaccountable-ai-151768

https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/
