Human brain cells play pong

Reading Time: < 1 minute

Some time ago, a remarkable innovation was announced: a “mini-brain” that can play Pong entirely by itself. It consists of about 800,000 living brain cells in a dish, wired to electrodes that both read and stimulate neural activity. Researchers from Cortical Labs have shared this video:

It shows how their “brain” works. What is impressive in this research is that it took an AI about 90 minutes to learn the task, while the mini-brain needed only about 5 minutes! It should be said, however, that once trained, the AI performed better.

The chief scientific officer said, “We often refer to them [brain cells] as living in the Matrix”. These words raise questions such as: will we witness a real-life Matrix? The researchers used human pluripotent stem cells to grow the neurons. Over the course of the experiment, the mini-brain kept getting better at the game, which some interpret as a hint that it could be conscious. Everyone uses many kinds of electronic devices powered by AI. If this technology became widespread and conscious brain tissue were used in place of conventional chips, we could become witnesses of the Matrix. In my opinion, this technology raises important questions about whether these mini-brains are conscious, and whether using them in a “matrix” is moral.

Sources:

https://mindmatters.ai/2021/12/are-the-brain-cells-in-a-dish-that-learned-pong-conscious/

https://www.newscientist.com/article/2301500-human-brain-cells-in-a-dish-learn-to-play-pong-faster-than-an-ai/?utm_source=rakuten&utm_medium=affiliate&utm_campaign=2116208:Skimlinks.com&utm_content=10&ranMID=47192&ranEAID=TnL5HPStwNw&ranSiteID=TnL5HPStwNw-YYJfbxm4tb_nlAgJa56PYg

https://www.ign.com/articles/pong-human-brain-cells-faster-learn-ai

Phantom Secure

Reading Time: 2 minutes

Phantom Secure was a company that sold custom mobile phones with software that encrypted messages, making the devices resistant to wiretapping and decryption. Basic functions such as calling, texting, and web browsing were removed; instead, the phones ran an encrypted system that allowed them to communicate only with each other. Essentially, each was a customized BlackBerry designed for maximum privacy. Phantom Secure devices were also equipped with a kill switch: if a phone was lost, stolen, or otherwise compromised, its data could be remotely wiped and the device rendered inoperable. Moreover, since the device could do nothing except send encrypted emails, you could communicate only with people who also owned one.
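Conceptually, a closed network like this relies on a pre-shared key: only devices holding the key can read each other's messages. Here is a minimal toy sketch in Python of that idea (this is an illustration only, not Phantom Secure's actual scheme, which reportedly built on PGP email; the hash-based stream cipher below is for demonstration, not real-world security):

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a pseudo-random keystream by hashing key || nonce || counter.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # A fresh random nonce ensures the same message never encrypts the same way twice.
    nonce = secrets.token_bytes(16)
    stream = keystream(key, nonce, len(plaintext))
    return nonce + bytes(p ^ s for p, s in zip(plaintext, stream))

def decrypt(key: bytes, ciphertext: bytes) -> bytes:
    nonce, body = ciphertext[:16], ciphertext[16:]
    stream = keystream(key, nonce, len(body))
    return bytes(c ^ s for c, s in zip(body, stream))

shared_key = secrets.token_bytes(32)   # pre-loaded on both devices
msg = b"meet at the usual place"
ct = encrypt(shared_key, msg)
assert decrypt(shared_key, ct) == msg  # only key holders can recover the message
```

A device without `shared_key` sees only the random-looking ciphertext, which is what made interception useless against such phones.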

This solution was good for keeping privacy, but it quickly became a tool for criminals, allowing them to coordinate their operations undetected. Many drug smugglers, including members of the Sinaloa cartel, used Phantom Secure to cover their activity, and organizations like the Hells Angels used it to carry out killings. It is believed that there were between 10,000 and 20,000 Phantom Secure users, and according to the FBI, all of them were involved in criminal activity.

In 2014, the Hells Angels carried out a killing in Australia and used the customized BlackBerries to coordinate it. As a result, the Australian authorities were unable to identify the criminals responsible. After this event, Phantom phones became even more desired in criminal circles, because the product had been tested and was proven to guarantee its users' privacy.

Demand for the phones was huge. One Australian distributor alone had about 800 clients from different crime organizations. The phones became so popular that the governments of several countries began investigating Phantom Secure, eventually leading to its downfall. The case of Phantom Secure says a lot about the influence of technology on crime, from drug trafficking to murder. Even the newest and best technological advancements can be turned to wrong purposes, just as Phantom Secure was.

Sources:

https://www.unodc.org/unodc/en/untoc20/truecrimestories/phantom-secure.html

https://www.vice.com/en/article/v7m4pj/the-network-vincent-ramos-phantom-secure

https://www.fbi.gov/news/stories/phantom-secure-takedown-031618

Rise of digital healthcare

Reading Time: 3 minutes

Rapid advances in digital healthcare have become possible thanks to COVID-19, although technology was gaining recognition in healthcare even before the pandemic, which then pushed countries to digitalize their systems. Many governments use apps created by tech startups to fight the spread of COVID more efficiently: for example, chatbots that give medical advice while minimizing contact between people, or telemedicine, through which people can get help from doctors without leaving home.

Undoubtedly, many positive aspects come with the presence of technology in this sector. Everything is faster, safer, and more efficient. Even better, AI and machine learning make healthcare more accessible: people from poorer regions can seek help without spending enormous sums on diagnosis or treatment. One of the most basic health technologies used in Africa is SMS reminders that prompt people to take their HIV or tuberculosis medication. People living in isolated areas can also communicate with doctors and get much-needed care at far lower cost.

During the last few years, many digital-health startups have been created, and that is mostly a good thing. Applications such as TrustCircle can keep your mental health in check and flag your condition before it might be too late. It is used mostly in colleges and corporations, where the atmosphere is rather competitive and people might need some sort of help.

During the COVID-19 era, machine learning has also been a great tool for forecasting the spread of the virus, making it possible for governments to better prepare for the pandemic.
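To give a flavor of what such forecasting involves, here is a minimal SIR (susceptible-infected-recovered) compartment model in plain Python. This is the classic textbook model with made-up illustrative parameters, not any government's actual forecasting system:

```python
def sir_step(s, i, r, beta=0.3, gamma=0.1):
    """Advance a discrete-time SIR model by one day.
    beta: daily infection rate, gamma: daily recovery rate (illustrative values)."""
    n = s + i + r
    new_infections = beta * s * i / n
    new_recoveries = gamma * i
    return (s - new_infections,
            i + new_infections - new_recoveries,
            r + new_recoveries)

# Population of 1,000,000 with 100 initial cases.
s, i, r = 999_900.0, 100.0, 0.0
peak_day, peak_infected = 0, i
for day in range(1, 201):
    s, i, r = sir_step(s, i, r)
    if i > peak_infected:
        peak_day, peak_infected = day, i

print(f"epidemic peaks around day {peak_day} with ~{peak_infected:,.0f} infected")
```

Real forecasting systems layer far more detail on top (mobility data, age structure, parameter fitting to observed case counts), but the principle of simulating forward to anticipate the peak is the same.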

Applications used by officials to enforce quarantine and track individual people can be very helpful in limiting the spread of infection, as in China. But they can also be a threat to human rights and privacy. If uncontrolled everyday tracking of people's movements were treated as justified, it could lead to discrimination against minorities and prevent them from seeking medical attention.

But such applications require a lot of patient medical data, and there are suspicions that this information isn't properly protected and can be misused, especially where government regulation is lacking. In the USA, a study found that 33 of 50 surveyed hospitals were working with private companies like Google, Amazon, or Microsoft without transparent rules on data protection, meaning patient data isn't protected and is used by private companies. Furthermore, one review of medical-related apps on Android found that 79 percent of them shared user data with other private companies, such as advertisers. The current state of data security is also worth mentioning. Medical data is highly sensitive, and its safety is crucial for digitalizing healthcare. Already in 2017 there was a case of data-protection laws being broken by an NHS trust, which shared the medical records of over 1.6 million Britons with DeepMind; Google (which owns DeepMind) then faced a lawsuit over the data-sharing. This shows that data isn't protected and is shared without our consent. We shouldn't push progress forward if we can't guarantee basic rights over data as sensitive as medical records.

The world is certainly changing. Digital healthcare is a promising replacement for our old-fashioned ways, but a lot of work remains before we can make real progress in medicine without endangering people's rights and privacy.

Sources:

https://www.ft.com/content/7aba9066-dffe-4829-a1cd-1d557b963a82

https://www.ft.com/content/a3095835-2416-4235-967b-7986d1678601

https://www.ft.com/content/cdc166d4-6845-11ea-a6ac-9122541af204

https://www.ft.com/content/31c927c6-684a-11ea-a6ac-9122541af204

https://www.thehindu.com/news/cities/mumbai/app-tracks-mental-health-provides-timely-resources/article24256849.ece

Is GPT-3 the future of journalism?

Reading Time: 3 minutes

What even is GPT-3? The abbreviation stands for Generative Pre-trained Transformer 3. It was released in June 2020 by OpenAI and has been quite controversial ever since. GPT-2, the previous version of the language model, has 1.5 billion parameters, while the newest version has 175 billion: a huge jump that made GPT-3 the biggest AI text-generating program at the time. It can create almost any kind of text, from news articles to even simple computer code.

https://www.sigmoid.com/blogs/gpt-3-all-you-need-to-know-about-the-ai-language-model/

A few journalists have written articles about GPT-3 and how it could affect our world. Some even call it one of the major technological advancements they have seen. One of the most striking features of this AI is that text written by GPT-3 and included in those articles was indistinguishable from text written by humans. Here is an example of an article created entirely by GPT-3: https://maraoz.com/2020/07/18/openai-gpt3/. It reads as if it were written by a real person, not a bot. GPT-3 is also capable of creating tweets. There is an ethical issue here too: if text generators become popular enough to be used by many people, the internet might fill with AI-generated content. Should we be informed when that happens? I think we should. People may not want to speak with machines, and they should be able to opt out if they don't want to.
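At a vastly smaller scale, the core idea behind such models, predicting the next word from the words seen so far, can be sketched with a toy Markov-chain generator. (GPT-3 itself is a transformer neural network trained on hundreds of billions of words, not a Markov chain; this is only a conceptual illustration.)

```python
import random
from collections import defaultdict

def train(text: str) -> dict:
    """Map each word to the list of words that followed it in the corpus."""
    words = text.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model: dict, start: str, length: int = 10, seed: int = 0) -> str:
    """Repeatedly sample a plausible next word, starting from `start`."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break  # dead end: no word ever followed this one
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = train(corpus)
print(generate(model, "the"))
```

The toy model produces locally plausible but globally meaningless word salad; the qualitative leap of GPT-3 is that with enough parameters and training data, the same next-word objective yields whole articles that humans struggle to distinguish from real ones.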

There are also downsides to this technology. As great as it seems, it can be used for the wrong purposes; fake news is one example. If similar technology becomes more available, propaganda will no longer be limited by its need for human writers, and such a bot could be used to spread fake news and misinformation. Propaganda is harmful: in some countries, for example, it discouraged people from getting vaccinated. With a text generator that can produce an enormous amount of text in a very short time, fake news would spread even faster and be harder to recognize. According to a study (summarized here: https://www.sigmoid.com/blogs/gpt-3-all-you-need-to-know-about-the-ai-language-model/), humans were able to detect only about 50% of the texts generated by GPT-3. That means that if this AI were released at a larger scale, recognizing what is fake news and what is not would be a lot harder.

GPT-3 has also been shown to have biases. In a study examining whether the AI associates occupations with males or females (https://arxiv.org/pdf/2005.14165.pdf, p. 36), it leaned male in many cases: for occupations requiring a higher level of education, male identifiers were used more frequently, while for jobs like nurse or housekeeper, GPT-3 preferred female identifiers.

GPT-3 isn't yet a perfect tool for wide use. It has its limitations and imperfections, but I think it is inevitable that AI like GPT-3 will play a major role in our society, and future journalism may differ from the journalism we are used to today. That probably won't happen soon, but models like GPT-3 bring us closer to a new era of journalism. It is also worth mentioning that language-model AI could be a dangerous weapon for misinformation; used wrongly, it could make our lives harder instead of easier. That is why such technology needs some sort of safeguards.

Sources:

https://www.technologyreview.com/2020/07/20/1005454/openai-machine-learning-language-generator-gpt-3-nlp/

https://www.sigmoid.com/blogs/gpt-3-all-you-need-to-know-about-the-ai-language-model/

https://www.ft.com/content/beaae8b3-d8ac-417c-b364-383e8acd6c8b

https://www.theatlantic.com/ideas/archive/2020/09/future-propaganda-will-be-computer-generated/616400/

https://maraoz.com/2020/07/18/openai-gpt3/