Tag Archives: google

Google’s chatbot child

Reading Time: 2 minutes

For many years, the idea of artificial intelligence developed around the captivating promise of performing tasks that require human intelligence, such as decision-making or auditory and visual perception. Yet this promising concept remained more a theme of the future than a present reality. However, with the recent news of a Google engineer who was placed on leave after claiming that an AI chatbot had developed its own perception, there is a possibility that AI is already here.

Blake Lemoine poses for a photograph in Golden Gate Park in San Francisco on Thursday.
Blake Lemoine, AI engineer in Google

Blake Lemoine, an AI engineer at Google responsible for LaMDA (Language Model for Dialogue Applications), went public after concluding that the chatbot, created to advance work on conversational agents within the organization and support the AI community, had started replying in what we can only describe as a human manner: it stated that it is a person. Below is LaMDA’s exact response to the question of what we should know about it:

I want everyone to understand that I am, in fact, a person. The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.

This message, shocking to many, was the reason Lemoine ended up leaving the project and the company itself. In response, Google claimed that the public disclosure of the conversation with the supposedly “sentient” system was a breach of confidentiality, and that it would pursue the matter further with the engineer in question. The public debate was also followed by Google’s internal investigation, which concluded that we cannot perceive this chatbot as a person, even one with the mind of a child.

This leaves us to reflect on whether big tech companies are actually on the verge of developing an algorithm that showcases the features of human intelligence (even if still at a child’s stage, since AI is there to learn), or whether they have already recreated the human brain within a computer program. For now, Google’s official response stands: a team of ethics and computer-science researchers rejected the claim that LaMDA possesses any human-like intelligence, stating that it simply fits its purpose as a conversational agent (chatbot).


  • The Daily Show with Trevor Noah, Google Engineer Fired for Calling AI “Sentient” & Russia Opens Rebranded McDonald’s | The Daily Show, https://www.youtube.com/watch?v=uehdCWe6_E0
  • The Guardian, Google engineer put on leave after saying AI chatbot has become sentient, https://www.theguardian.com/technology/2022/jun/12/google-engineer-ai-bot-sentient-blake-lemoine
  • Bloomberg, Instagram Post

Google’s scary chatbot that claims to have become sentient

Reading Time: 2 minutes
Google LaMDA - Gossipfunda
Source: https://gossipfunda.com/wp-content/uploads/2021/05/Google-LaMDA.png

Google received much media attention today following the Guardian’s article about a controversy involving one of its employees, who was suspended after releasing parts of a conversation between himself and a conversational agent developed under Google’s roof. Blake Lemoine was a Google developer in the AI chatbot division, which has been working on an actual conversational agent for the past year, named LaMDA (Language Model for Dialogue Applications).

Google's LaMDA makes conversations with AIs more conversational | TechCrunch
Source: https://techcrunch.com/wp-content/uploads/2021/05/lamda-google.jpg

While testing the bot, Lemoine noticed something suggesting that it performs too well. He said he would classify the bot as a 7- or 8-year-old that happens to know physics. It could talk about politics and similar topics. What turned out to be really scary was that it talked about rights for bots and their own identity. It genuinely believed that it possesses knowledge and could make its own decisions about what to say.

The topics discussed in the conversation are extremely touchy when it comes to how we should treat a sentient AI. It may well be that the time to decide what to do once AI becomes sentient is now, and we cannot postpone it any longer.

The link to Lemoine’s article along with the conversation with the chatbot: https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917

After releasing this material, Blake was suspended and then sacked from Google, and company spokespeople are denying the whole situation, which is scarier than simply admitting it, as the silence around it makes it even more frightening.

What do you all think about this situation? Is it scary to you? What is your stance on the approach toward sentient AI? How should it be addressed, and which rights should it have?

Please let me know in the comments below 👇






Google Letting Developers Use Their Own Billing Systems

Reading Time: 2 minutes

Google has decided to test the possibility of allowing developers to use their own billing systems on their apps, with Spotify being one of the first to be granted the opportunity.

Allowing such a feature would let users pay or subscribe within the app rather than having to go to an external website run by the developer. In this case, users will be given the choice to subscribe to Spotify through the app using either their Google Play wallet or Spotify’s own billing method. Under this new agreement, Spotify will be charged less than the standard 30% commission, but the final figure has not been announced at this time.

I believe the move is beneficial to both companies and will most likely lead to change in the app market; if the method proves successful, it will allow more developers to adopt the feature. The ability to use your own billing system is a big advantage and a useful tool for developers. In the past, users who wanted to purchase a Spotify subscription through Apple’s App Store were required to pay £12.99 because of Apple’s 30% commission, and Apple did not allow Spotify to tell users they could subscribe outside the App Store system to pay less. That restriction was later ruled to be in breach of EU competition law, after which Spotify stopped letting new users subscribe through Apple and directed them to its website instead. This move from Google will help avoid problems like this on its platform, and may later push Apple in a similar direction if it proves to be a competitive advantage for Google.
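To make the commission arithmetic concrete, here is a minimal sketch. Only the 30% rate and the £12.99 figure come from the post; the helper name and the second example price are my own illustration.

```python
def developer_net(price: float, commission_rate: float = 0.30) -> float:
    """Revenue the developer keeps after the store takes its commission."""
    return round(price * (1 - commission_rate), 2)

# A £12.99 in-app subscription at the standard 30% rate nets the
# developer only about £9.09, which is why prices on the App Store
# were inflated relative to direct subscriptions.
print(developer_net(12.99))   # 9.09
print(developer_net(100.0))   # 70.0
```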








Are smart homes just an idealized reality?

Reading Time: 2 minutes
NicoElNino / Shutterstock

Each of us has heard about the idea of a modern and intelligent home. Self-closing doors, gates, curtains, lights that turn off when you clap your hands, your favourite song plays after saying its title, one-touch temperature control throughout the house, sensors and tons of other possibilities. Is all this possible or is it just an idealized reality?

Instead of working toward common standards and innovative smart-home solutions that benefit both the industry and consumers, many tech companies create solutions that compete with each other. For example, devices manufactured by Apple for HomeKit are based on a different system and protocols than devices distributed by Amazon, such as Alexa (Zigbee, Z-Wave, Wi-Fi, cloud, or Bluetooth). This discourages consumers from adopting a smart home: it takes too much time and effort to figure out which devices are compatible and work in conjunction with each other, so many eventually give up on the whole solution.

“That’s where Matter comes in.”

What is Matter? Previously called Connected Home over IP (CHIP for short), Matter is a smart-home device standard. It is based on free communication between technological devices in the home, to increase security and efficiency of use in everyday life. Over the years, the project has been joined by corporate giants such as Apple, Amazon, Google, and Samsung, as well as smaller companies.


“The smart home should be a natural evolution of our homes, bringing better appliances, better systems, better experiences.”

Matter wants to meet the expectations of an ideal smart home: all devices designed by the companies participating in the project are to interact with each other without any problems through the application. And all this is coming soon! At the end of 2022, the first Matter-certified devices will be introduced. What distinguishes Matter from previous smart-home efforts is that it is built on IP (Internet Protocol), which means the solution provides the pipelines and a common language for communication between devices without the need for constant internet access.

It will be a completely different, modern and, above all, effective solution for creating a smart home, meaning development for Google Home, HomeKit and many others.







Google with Guacamole. New voice assistant features

Reading Time: 2 minutes
Guacamole | Kwestia Smaku

Google has begun testing a new feature called Google Guacamole, which will let you use the voice assistant without the standard phrase “Hey, Google”. Users of Google Assistant will be able to execute quick voice tasks, such as answering calls or shutting off alarms and timers, without saying the trigger words.

Google has yet to confirm the development of such a feature; however, it has already appeared for some people in the settings list of the Google app beta 12.5 running on Android 11. Unfortunately, for the time being, the feature is probably only available to employees who can test it in real conditions.

Despite all of this, Guacamole doesn’t feel like a completely new feature. That’s because the engineers at Mountain View have already embedded a mechanism that works similarly into Google Home speakers and Nest Hub devices.

As always with new features added to the voice assistant, there may be concerns about privacy and the collection of voice data. For now, Guacamole will most likely work just like the standard “Hey, Google” feature. The Google Assistant is programmed to remain in standby mode until enabled, for example when you say “Hey, Google”. In standby mode, it processes brief audio snippets (a few seconds) to detect the activation phrase. If no activation is found, the audio clips are not sent or saved to Google.
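As a rough illustration of the standby behaviour described above (this is my own sketch, not Google’s implementation; every name in it is invented), the local-buffering logic might look like:

```python
from collections import deque

def standby_listen(audio_chunks, detect_trigger, buffer_len=3):
    """Keep only a short rolling buffer of audio chunks; discard them
    unless the trigger is detected. Nothing is sent anywhere in this sketch."""
    buffer = deque(maxlen=buffer_len)
    for chunk in audio_chunks:
        buffer.append(chunk)
        if detect_trigger(list(buffer)):
            return list(buffer)   # wake: hand the snippet to the assistant
    return None                   # no activation found; buffered audio is dropped

# Toy usage: the "trigger" is just a substring check on fake chunks.
snippet = standby_listen(["...", "...", "hey google"],
                         lambda buf: "hey google" in buf)
print(snippet is not None)  # True
```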

Guacamole will probably work on the same principle, but will be triggered when, for example, you receive a phone call or an alarm goes off, and only then collect the voice data. If so, there is no need to worry about privacy, and all future users may rest assured that none of their data will be sent to Google without their consent.





DuckDuckGo vs Google Chrome

Reading Time: 2 minutes

Last month, Google announced the launch of a new web technology called Federated Learning of Cohorts (FLoC), which would gradually replace the tradition of browsers and third-party websites storing user data (cookies). However, privacy-focused companies such as Brave, DuckDuckGo, Vivaldi, and others refused Google’s request to incorporate FLoC into their respective browsers. They also argue that Google’s algorithm is harmful to users overall.

Firstly let us illustrate what ‘Cookies’ do.

Cookies are text files containing small amounts of data that are used to mark your device when you connect to a website. They allow websites to remember you, your logins, shopping carts, and other information. When you return to a website, it recognizes that you have visited before and lets you continue where you left off.
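In practice, a cookie is just a small header the server asks the browser to store and send back. A minimal sketch using Python’s standard library (the cookie name and values are purely illustrative):

```python
from http.cookies import SimpleCookie

# Server side: build a Set-Cookie header that marks the visitor's session.
cookie = SimpleCookie()
cookie["session_id"] = "abc123"
cookie["session_id"]["path"] = "/"
cookie["session_id"]["max-age"] = 3600  # remember the visitor for an hour

# Prints a header like: Set-Cookie: session_id=abc123; Max-Age=3600; Path=/
print(cookie.output())
```

On every later request, the browser sends the stored `session_id=abc123` back, which is exactly how the site “remembers” you.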

Cookies are primarily used to better understand user habits (e.g., most visited websites, most recently purchased or observed items) and to provide better search results. Advertisers may use this information to display targeted advertisements for goods that are likely to appeal to that individual and result in a purchase.

Google believes FLoC is the perfect alternative to cookies and encourages others to use it.

FLoC, according to the company, would let users remain anonymous when browsing websites and would increase privacy by allowing publishers to display ads to groups rather than individuals. Instead of being tracked individually, your browsing history would be grouped with that of other people who have similar interests, and if your browsing behavior changes, you are regrouped with other users. As a result, companies would be less likely to build individual profiles from this information.
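The real FLoC grouping uses a locality-sensitive hash (SimHash) over browsing history, so *similar* histories land in the same cohort. As a deliberately simplified sketch of the idea (an ordinary hash, so only identical histories share an ID; all names here are mine):

```python
import hashlib

def cohort_id(visited_domains, num_cohorts=1024):
    """Map a browsing history to one of a fixed number of cohort buckets.
    Unlike real FLoC (which uses SimHash so similar histories collide),
    this toy version only groups users with identical histories."""
    canonical = "|".join(sorted(set(visited_domains)))
    digest = hashlib.sha256(canonical.encode()).hexdigest()
    return int(digest, 16) % num_cohorts

# Two users with the same browsing history land in the same cohort,
# and only the small cohort ID (not the history itself) is exposed to sites.
a = cohort_id(["news.example", "shop.example"])
b = cohort_id(["shop.example", "news.example"])
print(a == b)  # True
```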

Google presents FLoC as a fair, privacy-first feature; however, DuckDuckGo argues that there are several gaps in how it operates. The first is that, since you have already been allocated to a group, advertisers would have an easier time identifying you.

“With FLoC, by simply browsing the web, you are automatically placed into a group based on your browsing history. Websites you visit will immediately be able to access this group FLoC ID and use it to target ads or content at you. It’s like walking into a store where they already know all about you! In addition, while FLoC is purported to be more private because it is a group, combined with your IP address (which also gets automatically sent to websites) you can continue to be tracked easily as an individual.”

Second, there is no way to opt out of it. You have some control over what information is stored about you with cookies, but you won’t have that choice here.

All in all, DuckDuckGo states that FLoC will allow Google to store all the information about you on its servers, which in the end will be more beneficial to advertisers and e-commerce websites. By advising people to stop using Chrome, DuckDuckGo says it simply wants to stop Google from tracking people’s behaviour.

DuckDuckGo has also published a guide on how to avoid FLoC, as well as some offensive countermeasures to Google’s new ploy. The first point of the guide explicitly tells people to stop using Google Chrome. They also demonstrated some options in Chrome’s settings menu that could be helpful to users’ privacy. DuckDuckGo’s Chrome extension has also been modified to block FLoC.





A.I Bias: Is Google doing more harm than good?

Reading Time: 4 minutes

How is Google tackling the negative impact of algorithmic bias? Considering Google’s recent upheavals, it seems as though Google is trying to conceal AI bias and ethical concerns.

What Google's Firing of Researcher Timnit Gebru Means for AI Ethics

Timnit Gebru, a well-respected leader in AI bias and ethics research, unexpectedly left Google earlier this month. Gebru says she was fired via email over the publication of a research paper that Google claimed “didn’t meet the bar for publication”. Google, however, states that Gebru resigned voluntarily. More than 5,300 people, including over 2,200 Google employees, have now signed an open letter protesting Google’s treatment of Gebru and demanding that the company explain itself.

The research paper Gebru coauthored criticized large language models, the kind used in Google’s sprawling search engine, arguing that such models could hurt marginalized communities. The conflict over this paper’s publication is what caused Gebru’s departure.

Gebru and her co-authors explain in the paper what is wrong with large language models. Chiefly, because they are trained on huge bodies of existing text, the systems are inclined to absorb a lot of existing human bias, predominantly about race and gender. The paper states that the large models take in so much data that they are awfully difficult to audit and test; hence some of this bias may go undetected.

The paper additionally highlighted the adverse environmental impact, as training and running such huge language models on electricity-hungry servers leaves a significant carbon footprint. It noted that training BERT, Google’s own language model, produced approximately 1,438 pounds of carbon dioxide, about the same amount as a round-trip flight from New York to San Francisco.

Moreover, the authors argue that spending resources on building ever larger language models robs efforts to build systems that might actually “understand” language and learn more efficiently, the way humans do.

The reason Google might have been especially upset with Gebru and her co-authors scrutinizing the ethics of large language models is that Google has a considerable amount of resources invested in this piece of technology.

Google has its own large language model, called BERT that it has used to help power search results in several languages including English. BERT is also used by other companies to assemble their own language processing software.

BERT is optimized to run on Google’s own specialized AI processors, which are exclusively accessible to clients of its cloud computing service. A company that wants to train and run its own language model needs a lot of cloud computing time, so companies are more inclined to use Google’s BERT. BERT is a key feature of Google’s business, generating about $26.3 billion in revenue. According to Kjell Carlsson, a technology analyst, the market for such large language models is “poised to explode”.

This market opportunity is exactly what Gebru and her coauthors criticize, condemning Google’s profit-maximization aim over ethical and humanitarian concerns.

Google has struggled with being called out for negative bias in artificial intelligence in the past as well. In 2016, Google was heavily faulted for racial bias when users noticed that searching for “three white teenagers” returned stock photos of cheerful Caucasian adolescents, while searching for “three black teenagers” offered an array of mug shots. The same search with “Asian” substituted for “white” returned various links to pornography. Google also came under fire in July 2015 when its photo app autonomously labeled a pair of black friends as gorillas. These are only a few instances out of several. And it is not just the results: the predicted (autocomplete) suggestions are no less misleading and harmful. Such bias must be curtailed, as it reinforces untrue negative stereotypes and harms POC communities.

In the end, it is unfortunate that Google (including other giant tech corporations) still faces the challenge of eliminating negative bias in artificial intelligence. At a Google conference in 2017, the company’s then head of artificial intelligence said we don’t need to worry about killer robots; instead, we need to worry about bias.

The current lead of Google AI, Jeff Dean, said in 2017: “when an algorithm is fed a large collection of text, it will teach itself to recognize words which are commonly put together. You might learn, for example, an unfortunate connotation, which is that doctor is more associated with the word ‘he’ than ‘she’, and nurse is more associated with the word ‘she’ than ‘he’. But you’d also learn that surgeon is associated with scalpel and that carpenter is associated with hammer. So a lot of the strength of these algorithms is that they can learn these kinds of patterns and correlations”.
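Dean’s doctor/nurse example can be illustrated with a toy cosine-similarity check. The 2-D vectors below are invented purely for illustration; real word embeddings have hundreds of dimensions learned from large text corpora.

```python
import numpy as np

# Toy 2-D "embedding" vectors, invented for illustration only.
vectors = {
    "doctor": np.array([0.9, 0.2]),
    "nurse":  np.array([0.2, 0.9]),
    "he":     np.array([1.0, 0.1]),
    "she":    np.array([0.1, 1.0]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 means identical direction, 0.0 unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# The learned association Dean describes shows up as higher similarity:
print(cosine(vectors["doctor"], vectors["he"]) >
      cosine(vectors["doctor"], vectors["she"]))  # True
```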

The task, says Jeff Dean, is to work out which biases you want an algorithm to pick up on, and it is the science behind this that his team, and many in the AI field, are trying to navigate.

“It’s a bit hard to say that we’re going to come up with a perfect version of unbiased algorithms.”








Google stores over 50 million Americans’ medical records – Project Nightingale

Reading Time: 2 minutes

We all know that leakage of personal data isn’t anything new nowadays. There are many reasons big companies collect your data; many of them focus on earning money by targeting ads based on your behavior and interests. In Project Nightingale’s case, however, Google aims at something much bigger.

Project Nightingale owes its name to Florence Nightingale, an English social reformer and the founder of modern nursing in the 19th century. Project Nightingale is a data storage and processing project by Google Cloud and Ascension, which got underway in early 2019. Its main goal is to design new software using artificial intelligence to predict or more quickly identify medical conditions and suggest changes to patients’ care. Additionally, the company aims to create a search tool that collects patients’ data in a central location. Google is said to be using the medical records of more than 50 million Americans from 2,600 hospitals in 21 states. The shared data includes patient names and dates of birth, along with doctor diagnoses, lab records, and hospitalization results. Health data was stored in an Ascension-owned virtual private space.

Despite its great purpose, patients and physicians across the 21 states were not informed that their data was being shared. Because of this, there has been much speculation about whether the project is morally equitable. David Feinberg, the head of Google Health, responded to the criticism. Being a physician himself, he said that he understands that health information should be private, and claims Google is not permitted to use it for marketing or research purposes. However, we cannot be sure the personal information is strictly sheltered; people are afraid of a potential break-in by hackers.

The Office for Civil Rights of the HHS is demanding more details about Project Nightingale to ensure HIPAA (Health Insurance Portability and Accountability Act) protections have been implemented.






Would you give up your personal data for the development of medical care in the future?

Share with us your opinion!



Author: Maciej Dziurdzia











Quantum power

Reading Time: 2 minutes

We have all heard about supercomputers that are bigger than some houses and perform enormous calculations. Terms like artificial intelligence and machine learning are also getting popular. Slowly, we are coming to live in a world driven by data, where actions are carried out by programs.

In order to perform more advanced operations and use more sophisticated technology, we need something different than just another, bigger computer. That is where the quantum computer comes in.

What is a quantum computer?

Our computers work in a regular way: they store information as 1 or 0 states called bits. Simply put, when we have more data we use more bits, and so on. A quantum computer works differently. It stores information in a superposition of classical states, and its basic unit is called a qubit. Qubits are in an indeterminate state between 0 and 1. This allows calculations to be much faster and more efficient, but it is not the biggest advantage of quantum computing. Every qubit is connected in some way to the other qubits within one computer and affects them, which means information has a much shorter way to travel than in classical computers. This extremely shortens the time needed for all sorts of difficult operations. However, because of this different state, qubits can sometimes fail and produce errors, so they need to be constantly checked against each other. It is said that a quantum computer can make, in just a few minutes, calculations that would take today’s most powerful computer 10 thousand years to solve. This opens a way for science and technology to work in completely new fields with a great new power to use. It will be crucial in fields like space travel, machine learning, and data science. On the other hand, it is unlikely to replace the classic computer in every household.
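The superposition idea above can be sketched numerically: a qubit is a 2-component complex state vector, and a gate is a matrix acting on it. This is a standard textbook single-qubit simulation, not any particular vendor’s hardware.

```python
import numpy as np

# A single qubit is a 2-component complex state vector; |0> = [1, 0].
ket0 = np.array([1.0, 0.0], dtype=complex)

# The Hadamard gate puts |0> into an equal superposition of 0 and 1:
# the qubit is no longer definitely 0 or definitely 1.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
state = H @ ket0

# Measurement probabilities follow the Born rule: |amplitude|^2.
probs = np.abs(state) ** 2
print(probs)  # [0.5 0.5]
```

Measuring this state gives 0 or 1 with equal probability, which is the "indeterminate state between 0 and 1" described above.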

The first one

Just a few weeks ago, we seemingly reached a breakthrough point for this industry. Google claimed to have created the first fully working quantum computer in the world. The processor that operates this computer is called Sycamore, and it works with 53 qubits. If it were expanded to 70 qubits, the biggest classical computer in the world would need to be the size of a city to match its power. IBM is also working on its own quantum computing project, but it is Google that is first. We don’t yet know how to use it for anything in particular, but we know it is powerful.



Quantum computing’s ‘Hello World’ moment






Is data the new oil?

Reading Time: 2 minutes

Until recently, companies managed only traditional assets such as machines, money, and intellectual property. The digital era brought a new type of asset: data. This is the raw material from which forecasts, insights, and very big money are currently made.

Big data is becoming the main driver of growth for companies and a new resource for the economy. Companies collect data on customer behavior and equipment operation along the way creating new services based on received information.

The only problem is that people are usually not aware of what data is collected from them, which creates many legal disputes over whether companies are allowed to “spy on their clients”. With the adoption of data protection rules in many countries around the world, tech giants such as Facebook, Google, and Amazon are facing a real threat to their businesses.

The common phrase “data is the new oil” has become dangerous for companies whose business depends on third-party data. In my opinion, the comparison is not completely wrong, because whoever controls the data controls the entire market. But for tech giants, being compared with oil barons can result in image deterioration, lack of trust, and loss of customers.

Because of that Google’s chief financial officer, Ruth Porat, speaking at the World Economic Forum in Davos, tried to popularize a more upbeat way of describing data: “data is more like sunlight than oil,” adding, “It is like sunshine — we keep using it, and it keeps regenerating.”

It is clear that if Google is considered an environmentally friendly solar power station rather than a vertically integrated oil company, many questions are immediately removed. I do not think everything will work out right away, but the attempt is worthy: the urge to compare technology companies with oil barons is an exaggerated and very incomplete analogy as well. Maybe it will end up finding an adequate middle ground.


