
Netflix expands its campaign against password sharing to more nations. Is it going to save the streaming platform leader?

Reading Time: 2 minutes

There is a new policy for Netflix users. Each account must set a ‘primary location’, and every member of the household at that location can log in to the same Netflix account. Users who want to add profiles of people who do not live in that house, referred to as ‘extra members’, will have to pay more. The policy was first introduced in Chile, Costa Rica, Peru, Argentina, the Dominican Republic, El Salvador, Guatemala and Honduras. Canada, New Zealand, Portugal and Spain are the four additional nations where Netflix is now imposing restrictions on password sharing. Adding an extra member will cost $5.96 in Canada, $5.09 in New Zealand, $4.30 in Portugal and $6.45 in Spain.

Netflix estimates that one hundred million people around the world use shared accounts. The company claimed that the revenue lost to account sharing limits its ability to invest in new content, and said it intends to expand the new strategy to more nations in the coming months.

In 2022, Netflix saw its subscriber numbers fall sharply amid concerns about subscription streaming fatigue and increased competition from rivals such as Disney and Apple.

In my opinion, it is difficult to predict whether Netflix’s new policy will improve its market position. Firstly, consumers have plenty of streaming platforms to choose from, so they do not have to pick Netflix. Secondly, some users already find the content lacking: Netflix is removing many films and cancelling many series because of lower profits. In addition, the new policy will leave a lot of consumers dissatisfied. Many of them are friends who share an account but do not live together; they would have to pay much more from now on, and that may prompt them to cancel their subscriptions. I have already read plenty of comments from Netflix customers saying that the day the new policy reaches their country, they will cancel immediately. On the other hand, the policy will bring in more money, which Netflix could invest in the new content that many subscribers currently find unsatisfying.

Thanks for your time. Feel free to comment below 🙂

References:

https://edition.cnn.com/2023/02/09/media/netflix-password-sharing-crackdown/index.html

https://www.polygon.com/23591400/netflix-password-sharing-policy-rules

Artificial Intelligence detects people’s emotions.

Reading Time: 3 minutes

By 2023, one of the most popular uses of machine learning will be emotional AI, a technology that can recognise and respond to human emotions. For example, former Google researcher Alan Cowen launched Hume AI, which is creating tools to detect emotions through vocal, facial, and linguistic expressions. Another company working on emotional AI is Smart Eye, which recently acquired Affectiva, the creator of the SoundNet neural network, an algorithm that detects emotions such as anger from audio samples in less than 1.2 seconds. Even the video platform Zoom is introducing Zoom IQ, a new feature that will soon give customers real-time measurements of emotion and engagement during a virtual conference.

Tech businesses will introduce sophisticated chatbots in 2023 that can accurately replicate human emotions to build stronger relationships with customers in the banking, education, and healthcare industries. Microsoft’s chatbot Xiaoice is already successful in China, with users reportedly having conversations with “her” more than 60 times each month. It has even passed the Turing test: users did not realise for 10 minutes that they were not talking to a human. According to an analysis by the consultancy Juniper Research, there will be 2.8 billion chatbot interactions annually by 2023.

Emotional AI will also be widely used in schools by 2023. Some secondary schools in Hong Kong already employ an artificial intelligence application made by Find Solutions AI that analyses tiny facial muscle movements to identify positive and negative emotions. This allows teachers to monitor students’ emotional changes, motivation and concentration, and to intervene early if a student starts to lose interest.
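As a much-simplified sketch of what such a monitoring system does (the feature names, weights and threshold below are invented for illustration, not Find Solutions AI’s actual model), the idea of scoring facial-movement features and flagging low-engagement students looks like this:

```python
# Toy engagement monitor: scores hypothetical facial-movement readings
# (values in [0, 1]) and flags students whose score drops below a
# threshold. All feature names, weights and thresholds are made up.

def engagement_score(features):
    """Weighted sum of facial-movement features, in [0, 1]."""
    weights = {"gaze_on_board": 0.5, "brow_movement": 0.2, "smile": 0.3}
    return sum(weights[name] * features.get(name, 0.0) for name in weights)

def flag_disengaged(students, threshold=0.4):
    """Return names of students whose engagement score falls below threshold."""
    return [name for name, feats in students.items()
            if engagement_score(feats) < threshold]

students = {
    "Ana": {"gaze_on_board": 0.9, "brow_movement": 0.6, "smile": 0.4},
    "Ben": {"gaze_on_board": 0.1, "brow_movement": 0.2, "smile": 0.1},
}
print(flag_disengaged(students))  # → ['Ben']
```

A real system would derive these features from a camera feed and a trained model; the point here is only the monitoring loop, not the computer vision.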

On the other hand, emotional AI is not perfect. It is based on algorithms that do not consider the social and cultural context of the person and the situation. An algorithm can detect and report crying, for example, but it cannot always determine the cause and meaning of the sobbing. Similarly, a scowling face does not always indicate an angry person, yet that is the conclusion an algorithm will likely reach. We all adapt to our society, so our expressions do not always accurately reflect how we feel inside. Furthermore, emotional AI is likely to exacerbate gender and racial inequalities. A 2019 UNESCO report, for example, showed the negative effects of the gendering of AI technology, with ‘female’ voice assistant systems built on ideals of emotional passivity and servility. Racial inequality is a concern too: an analysis of 400 NBA games using two well-known emotion recognition systems, Face++ and Microsoft’s Face API, found that black players tended to be assigned more negative emotions, even when they were smiling.

In my opinion, emotional AI can benefit us: it could free up medical staff’s time by talking to patients and giving them the support they need, and it could help students stay engaged in class by alerting teachers to those who are drifting off. On the other hand, emotional AI has disadvantages. It is biased against certain groups and therefore discriminates against them. Moreover, some people simply do not want to be analysed. Furthermore, AI cannot accurately anticipate emotions, nor can it be empathetic, so it cannot replace people such as psychologists.

Thanks for your time. Feel free to comment below 🙂

References:

https://www.wired.co.uk/article/empathy-artificial-intelligence

Are we going to see a race between Artificial Intelligence chatbots?

Reading Time: 2 minutes

Most of us have heard of, or even used, the chatbot ChatGPT. It has been available to the public for two months and has gained massive popularity. The programme will produce the response you are hoping for on almost any subject, whether a maths question or an English essay. That makes it a threat to education: students can now have ChatGPT or its rivals write their university assignments or even theses. On the other hand, the programme draws its knowledge from the Internet and, as we know, that knowledge may not be true. Moreover, ChatGPT’s training data is not updated and ends in 2021.

The designers of ChatGPT have their sights on the multi-billion-dollar internet search industry, which is why the chatbot has been called the Google killer. In 2020, Google’s parent company Alphabet generated $104 billion in revenue from search alone. That is also why Microsoft, which owns the search engine Bing, has announced a collaboration with OpenAI.

Recently, Google announced the launch of Bard, an AI chatbot that will compete with ChatGPT. Bard is built on LaMDA, Google’s language model. The tech giant also recently announced a $300 million investment in Anthropic, a company creating a ChatGPT rival.

Meta, too, has its own Artificial Intelligence chatbot, called BlenderBot.

In my opinion, ChatGPT is the leader among chatbots for now, but who knows what the future will bring? A race between Artificial Intelligence chatbots may be beneficial, improving the technology and pushing the boundaries of Artificial Intelligence. Moreover, researchers can learn from one another.

Thanks for your time. Feel free to comment below 🙂

References:

https://www.bbc.com/news/technology-64538604

Is the Lensa AI app good for us?

Reading Time: 2 minutes

Anyone who has used social media for more than a week has probably seen people posting modified pictures of themselves with fantastical or anime features.

Lensa AI is the app that allows us to create these perfect avatars based on our own photos.

Many users of the app, mostly women, have noticed that even though they uploaded modest photos, the app generated nudes and cartoonishly sexualized features, like big breasts. Another problem is that many women of color said the app lightened their skin.

To test the app, Olivia Snow decided to use herself as a guinea pig and uploaded photos from her childhood. In some photos the AI recognized a child’s body and did not add breasts, but in other cases it added breasts that were clearly distinct from clothing. She then uploaded a mix of her childhood photos and selfies, and the results were shocking: fully nude images with an adolescent, sometimes childlike, face on a distinctly adult body. One avatar showed her childlike face holding a leaf between naked adult breasts.

In my opinion, the Lensa AI app is dangerous to some extent. First of all, pedophiles could use it to create child pornography and upload it to the Internet. Others could use the app against people they dislike, for example by uploading generated images to a pornographic site. Furthermore, the app can deepen people’s insecurities: its purpose is to create the perfect photo or avatar, so users may admire how beautiful they look in the AI-generated pictures, realise they do not look the same in reality, and come to think of themselves as ugly.

Thanks for your time. Feel free to comment below 🙂

References:

https://www.wired.com/story/lensa-artificial-intelligence-csem/

Is it possible to diagnose Parkinson’s or COVID-19 based on the user’s voice?

Reading Time: 2 minutes

It is difficult to detect Parkinson’s disease or COVID-19 in their early stages. Now there may be a solution to that problem: an app called Aum, which aims to detect both diseases early.

Dinesh Kumar, a professor at the Royal Melbourne Institute of Technology, and his colleagues conducted an experiment to discover whether machine-learning algorithms could detect the subtleties in a person’s voice. They recruited 36 people with Parkinson’s disease and 36 without it.

The participants had to say different phonemes requiring sounds from the throat (/a/), the mouth (/o/), and the nose (/m/). The researchers recorded the samples on iOS devices, then developed and applied algorithms that could differentiate between people with and without Parkinson’s disease. In IEEE Access, they reported that their algorithm identified people with Parkinson’s disease with 100 per cent accuracy. Kumar also said they could distinguish Parkinson’s patients who take medication from those who do not.

The researchers then applied a different machine-learning algorithm from the previous one. It turned out that features extracted from the vowel /i/ during the first three days after admission to hospital were the most effective at differentiating between people with a COVID-19 lung infection and healthy people. That algorithm was 94 per cent accurate.
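The articles do not spell out the researchers’ actual features or algorithms, so purely as an illustration of the general shape of the approach, extract acoustic features per recording, then classify a new voice by its similarity to each group, here is a toy nearest-centroid sketch (all feature names and numbers are invented):

```python
# Toy nearest-centroid classifier over made-up acoustic features,
# e.g. (pitch jitter, loudness variation) of a sustained vowel.
# The real study's features and algorithm differ; this only shows
# the train-then-classify structure.
import math

def centroid(samples):
    """Component-wise mean of a list of equal-length feature tuples."""
    n = len(samples)
    return tuple(sum(s[i] for s in samples) / n for i in range(len(samples[0])))

def train(labelled):
    """labelled: dict mapping label -> list of feature tuples."""
    return {label: centroid(samples) for label, samples in labelled.items()}

def classify(model, features):
    """Pick the label whose centroid is nearest (Euclidean distance)."""
    return min(model, key=lambda label: math.dist(model[label], features))

# Invented (jitter, loudness_variation) training values.
model = train({
    "parkinsons": [(0.9, 0.8), (0.8, 0.9), (0.85, 0.7)],
    "healthy":    [(0.2, 0.3), (0.1, 0.2), (0.3, 0.25)],
})
print(classify(model, (0.75, 0.8)))  # → parkinsons
print(classify(model, (0.15, 0.3)))  # → healthy
```

In practice, the hard part is extracting features that actually separate the groups, which is exactly what the vowel experiments above were probing.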

In my opinion, Aum is beneficial for us. With such an app, we can detect illnesses like Parkinson’s or COVID-19 in their early stages. We can then prepare for the future, for example by buying assistive devices for a person with Parkinson’s disease or adapting our house for them. Moreover, patients can start taking medicines from the very beginning of the disease, which may help slow its effects. With COVID-19, taking medication from the start of the illness can speed up recovery. However, if the app got a diagnosis wrong, it could end badly for that person, causing unnecessary expenses, stress or breakdowns leading to mental health problems. I think we should not trust the app 100 per cent, and the researchers should continue to develop it.

Thanks for your time. Feel free to comment below 🙂

References:

https://spectrum.ieee.org/parkinsons-disease-diagnosis

https://www.youtube.com/watch?v=stL3BSSgwp0

Is 3D printing a threat to safety?

Reading Time: 2 minutes

Most of us have probably already heard about 3D printers. These devices can print most of the things we want, and their popularity is increasing.

Lately, there have been reports from the UK of police finding printed guns in people’s houses. In the beginning, these 3D-printed weapons were unreliable. Currently, 3D-printed components make up at most 80 to 90% of a gun; the other parts still have to be made from metal. One reason the popularity of 3D-printed guns has increased is the coronavirus pandemic, when transporting weapons across borders became difficult. There is a chance that in the future 3D-printed guns will be fully ready to use, once 3D printers are able to print metal components.

In my opinion, 3D printing is a threat to safety. People who do not have access to a real gun will be able to print one for themselves, which could result in more violence in the world. On the other hand, 3D printing can improve our lives; for instance, we may one day be able to print organs, which makes 3D printers important. I think we can prevent 3D printing from becoming a threat to safety. Companies that produce 3D printers could monitor what their customers create with the devices; if they got a notification about a gun being printed, they could block the printer and call the police. Companies could also require a contract in which customers state, and sign off on, what the 3D printer will be used for.

Thanks for your time. Feel free to comment below 🙂

References:

https://www.bbc.com/news/technology-63495123

Bumble’s AI tool available to the wider tech community

Reading Time: 2 minutes

In 2019, the dating app Bumble launched an AI tool called Private Detector. It alerts users when they have been sent a potential nude image and automatically blurs the photo; they then have the choice of viewing, deleting, or reporting it.

The AI tool was launched in response to cyberflashing, the act of sending unsolicited nude photos through a phone. There is even a proposed bill in the UK that would make cyberflashing a criminal offence. If it is approved, social media companies that fail to remove nudity will face multimillion-euro fines.

Online harassment is a serious problem. Research from 2020 found that 75.8% of girls between the ages of 12 and 18 had been sent unsolicited nude images of boys or men.

Anyone who wants Bumble’s tool can now find it on GitHub. Private Detector has been released under the Apache License, so others can adapt it and build features of their own.
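The released tool itself is a neural image classifier, but the user-facing flow described above, score an incoming image, blur it if it looks lewd, and let the recipient choose what to do, can be sketched like this (the threshold and function names are invented for illustration, not Bumble’s actual API):

```python
# Sketch of the recipient-side flow: a classifier probability decides
# whether an incoming image is presented blurred, and the recipient
# then picks an action. The probability would come from the real
# Private Detector model; here it is just a parameter.

BLUR_THRESHOLD = 0.5  # invented cut-off for illustration

def handle_incoming(image_id, lewd_probability):
    """Return how the image should be presented to the recipient."""
    if lewd_probability >= BLUR_THRESHOLD:
        return {"image": image_id, "blurred": True,
                "actions": ["view", "delete", "report"]}
    return {"image": image_id, "blurred": False, "actions": []}

msg = handle_incoming("photo_123", lewd_probability=0.92)
print(msg["blurred"], msg["actions"])  # → True ['view', 'delete', 'report']
```

Keeping the classifier separate from the presentation logic is what lets other apps reuse the open-sourced model while designing their own user experience around it.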

In my opinion, Bumble’s move to open its AI tool to the wider tech community is beneficial. First of all, other social media companies might use the tool to remove nudity from their apps and so avoid fines. Secondly, children’s safety will be much improved: adolescents using social media will receive lewd images only in blurred form, and parents of a small child may like the fact that if the child accidentally clicks on ads inappropriate for their age, they will not see the actual photo. Furthermore, people with traumatising memories of abuse would prefer not to see nude photos, and Private Detector will help with that.

Thanks for your time. Feel free to comment below 🙂

References:

https://www.business-standard.com/article/technology/bumble-open-sources-ai-tool-private-detector-that-detects-nudes-122102500464_1.html

https://www.artificialintelligence-news.com/2022/10/25/bumble-open-sources-its-lewd-spotting-ai-tool/

https://www.breakingnews.ie/ireland/new-bill-will-see-cyberflashing-become-a-criminal-offence-1378842.html

How did AI help people after a natural disaster?

Reading Time: 2 minutes

I think most of us have already heard about Hurricanes Ian and Fiona. Both occurred in September 2022 in North America and Latin America and did a lot of damage, especially to Cuba, the southeast United States, Canada, Puerto Rico, and the Dominican Republic.

To help people who suffered from the hurricanes, the nonprofit GiveDirectly collaborated with Google. Using a Google tool called Skai, which analyses satellite imagery from before and after a disaster and estimates the severity of damage to buildings, GiveDirectly could tell which areas were the most destroyed and most in need of financial support. People in need were then given a one-time payment of $700 to buy the most necessary things, such as food.
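Skai’s actual models are machine-learned from satellite imagery, but the before-and-after idea can be illustrated with a deliberately naive sketch that compares two tiny brightness grids and reports how much of a rooftop changed (all values below are invented):

```python
# Naive stand-in for before/after damage assessment: compare two small
# grayscale grids and report the fraction of pixels whose brightness
# changed substantially. Skai's real pipeline uses trained models on
# satellite imagery, not a raw pixel diff.

def damage_fraction(before, after, threshold=50):
    """Fraction of pixels whose brightness changed by more than threshold."""
    changed = total = 0
    for row_before, row_after in zip(before, after):
        for b, a in zip(row_before, row_after):
            total += 1
            if abs(b - a) > threshold:
                changed += 1
    return changed / total

before = [[200, 200], [200, 200]]  # intact rooftop: uniformly bright
after  = [[200,  90], [80, 200]]   # partly destroyed after the storm
print(damage_fraction(before, after))  # → 0.5
```

An aid organisation could then rank areas by such a severity score to decide where payments go first, which is essentially how the imagery guided GiveDirectly’s selection.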

In my opinion, this use of AI made relief more efficient. People who needed money did not have to queue for handouts in public, because thanks to Google’s help GiveDirectly already knew who should receive it, and the selection of whom to aid became much faster. On the other hand, this kind of solution has a dangerous side, because AI is not a human and cannot truly check whether the chosen person seriously needs the help. It is not like aid workers, who can talk with people and notice whether someone needs financial support. For example, say a person has one of the most damaged properties and Skai detects it, so that person later gets the money. What if he or she is rich and does not need it? Does the AI detect that? No. Of course the person could return the money, but that would take time, and some people are greedy and would not tell anyone. Meanwhile, people who really need the money do not get it, and that can be crucial for them, for example because of hunger.

Thanks for your time. Feel free to comment below. 🙂

References:

https://en.wikipedia.org/wiki/Hurricane_Ian#Aftermath

https://en.wikipedia.org/wiki/Hurricane_Fiona#Aftermath

https://screenshot-media.com/technology/ai/artificial-intelligence-hurricane-relief/

https://www.independent.co.uk/tech/hurricane-ian-damage-relief-ai-b2199411.html