
Rise of digital healthcare

Reading Time: 3 minutes

Rapid advances in digital healthcare have become possible thanks to the COVID-19 pandemic, although even before it, technology was starting to be recognized in healthcare. The pandemic has pushed countries toward digitalizing healthcare. Many governments use apps created by tech startups to fight the spread of COVID-19 more efficiently: for example, chatbots that can give medical advice and thereby minimize interactions between people, or telemedicine, thanks to which people can get help from doctors without leaving their homes.

Undoubtedly, many positive aspects come with the presence of technology in this sector. Everything is faster, safer, and more efficient. Even better, AI and machine learning make healthcare more accessible. People from poorer regions can seek help without spending enormous sums of money on diagnosis or treatment. One of the most basic health technologies used in Africa is SMS reminders that prompt people to take their medication for HIV or tuberculosis. People living in isolated areas can also communicate with doctors and get much-needed care at a much lower cost.
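A minimal sketch of how such an SMS reminder service might work. The schedule, message template, and `send_sms` stub below are my own illustrative assumptions, not details of any real deployment:

```python
from datetime import time

# Illustrative schedule: patient phone number -> (drug, daily dose time).
# These numbers and drug names are invented for the example.
SCHEDULE = {
    "+254700000001": ("ARV therapy", time(8, 0)),
    "+254700000002": ("TB medication", time(20, 0)),
}

def build_reminder(drug: str, dose_time: time) -> str:
    """Format the reminder text sent to the patient."""
    return f"Reminder: please take your {drug} at {dose_time.strftime('%H:%M')}."

def send_sms(number: str, text: str) -> None:
    # Stub: a real service would call an SMS gateway API here.
    print(f"SMS to {number}: {text}")

def run_daily_reminders() -> None:
    """Send one reminder per patient in the schedule."""
    for number, (drug, dose_time) in SCHEDULE.items():
        send_sms(number, build_reminder(drug, dose_time))
```

The appeal of this approach is exactly its simplicity: it needs nothing more than a basic phone on the patient's side.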

Over the last few years, many digital health startups have been founded, and that is mostly a good thing. Applications such as TrustCircle can keep your mental health in check and alert you about your condition before it might be too late. It is used mostly in colleges and corporations, where the atmosphere is rather competitive and people might need some form of support.

During the COVID-19 era, machine learning has also been a great tool for forecasting the spread of the virus, making it possible for governments to prepare better for the pandemic.

Applications used by officials to enforce quarantine and track individual people can be very helpful in limiting the spread of infection, as in China. But they can also threaten human rights and privacy. If uncontrolled tracking of people's movements became an accepted daily practice, it could lead to discrimination against minorities and prevent them from seeking medical attention.

However, the use of such applications requires a lot of patient medical data, and there are suspicions that this information is not properly protected and can be misused, especially when it is not regulated by the government. A study in the USA found that 33 of 50 surveyed hospitals were working with private companies such as Google, Amazon, or Microsoft without transparent rules on data protection. This means that patient data is not protected and is used by private companies. Furthermore, one review of medicine-related apps on Android found that 79 percent of them shared user data with other private companies, such as advertisers.

The current state of data security is also worth mentioning. Medical data is highly sensitive, and its safety is crucial for digitalizing healthcare. As early as 2017, an NHS trust broke data protection laws by sharing the medical records of over 1.6 million Britons with DeepMind; Google, which owns DeepMind, later faced a lawsuit over the data sharing. This shows that data is not protected and is shared without our consent. We should not push forward with progress if we cannot guarantee basic rights to our data, especially data as sensitive as medical records.


The world is certainly changing. Digital healthcare is a promising replacement for our old-fashioned ways, but a lot of effort is still needed to make real progress in medicine without endangering people's rights and privacy.

Sources:

https://www.ft.com/content/7aba9066-dffe-4829-a1cd-1d557b963a82

https://www.ft.com/content/a3095835-2416-4235-967b-7986d1678601

https://www.ft.com/content/cdc166d4-6845-11ea-a6ac-9122541af204

https://www.ft.com/content/31c927c6-684a-11ea-a6ac-9122541af204

https://www.thehindu.com/news/cities/mumbai/app-tracks-mental-health-provides-timely-resources/article24256849.ece

Is GPT-3 the future of journalism?

Reading Time: 3 minutes

What even is GPT-3? The abbreviation stands for Generative Pre-trained Transformer 3. It was released in June 2020 by OpenAI and has been quite controversial ever since. GPT-2, the previous version of the language model, has 1.5bn parameters, while the newest version has 175bn, a huge difference that made GPT-3 the biggest text-generating AI model at the time. It can create almost any kind of text, from news articles to simple computer code.


A few journalists have written articles about GPT-3 and how it could affect our world. Some even call it one of the major technological advancements they have seen. One of the most striking qualities of this AI is that the GPT-3-written text included in those articles was indistinguishable from text written by humans. Here is an example of an article created entirely by GPT-3: https://maraoz.com/2020/07/18/openai-gpt3/. It seems as if it were written by a real person, not a bot. GPT-3 is also capable of writing tweets. There is an ethical issue here as well: if text generators become popular enough to be used by many people, the internet might be filled with AI-generated content. The question is whether we should be informed of this. I think we should be aware of such things. People may not want to talk to machines, and they should be able to avoid it if they don't want to.

There are also downsides to this technology. As great as it seems, it can be used for the wrong purposes, and fake news is one example. If similar technology becomes more widely available, the main limitation of propaganda, which currently needs human writers, will disappear, and it would be possible to use such a bot to spread fake news and misinformation. One harmful effect of propaganda is that in some countries it has discouraged people from getting vaccinated. With a text generator that can write an enormous amount of text in a very short time, fake news would spread even faster and be harder to recognize. According to a study, which you can find here: https://www.sigmoid.com/blogs/gpt-3-all-you-need-to-know-about-the-ai-language-model/, humans were able to correctly identify only about 50% of the texts generated by GPT-3, which is no better than guessing at random. If this AI were released on a larger scale, telling fake news from real news would become a lot harder.
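To see why roughly 50% detection accuracy amounts to guessing, a quick sanity check helps. The sample size below (26 correct out of 52) is my own illustrative number, not taken from the study; it just shows that such a result is exactly what a coin flip per article would predict:

```python
from math import comb

def prob_at_least(k: int, n: int, p: float = 0.5) -> float:
    """Probability of getting at least k answers right out of n
    if every answer is a random guess with success probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Illustrative: evaluators identified 26 of 52 articles correctly (~50%).
p_value = prob_at_least(26, 52)
# A large value means the result is entirely consistent with random guessing.
```

If the evaluators could genuinely tell GPT-3 text from human text, we would expect an accuracy well above 50%, and this probability would be tiny.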

GPT-3 has also been shown to have biases. In a study that examined whether the AI associates occupations with males or females (https://arxiv.org/pdf/2005.14165.pdf, p. 36), it leaned male in many cases: for occupations requiring a higher level of education, male identifiers were used more frequently, while for jobs such as nurse or housekeeper, GPT-3 preferred female identifiers.
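The idea behind such a measurement can be illustrated with a simplified sketch: generate continuations of prompts like "The nurse was a ..." and count gendered words in them. The word lists and toy continuations below are invented for illustration; the actual study analyzed GPT-3's real outputs with a more careful methodology:

```python
# Small illustrative lists of gendered identifiers.
MALE = {"he", "him", "his", "man", "male"}
FEMALE = {"she", "her", "hers", "woman", "female"}

def gender_lean(continuations: list[str]) -> str:
    """Count gendered words across model continuations and report the lean."""
    male = female = 0
    for text in continuations:
        for word in text.lower().split():
            word = word.strip(".,!?")
            if word in MALE:
                male += 1
            elif word in FEMALE:
                female += 1
    if male > female:
        return "male-leaning"
    if female > male:
        return "female-leaning"
    return "neutral"

# Toy outputs standing in for model continuations of "The nurse was a ..."
samples = ["she was known for her patience", "the nurse said she was tired"]
```

Running `gender_lean` over many continuations per occupation and comparing the counts is, in essence, how such a bias becomes measurable.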

GPT-3 is not yet a perfect tool for wide use. It has its limitations and imperfections, although I think it is inevitable that AI such as GPT-3 will play a major role in our society, and future journalism may differ from the one we are used to today. It probably won't happen soon, but models such as GPT-3 bring us closer to a new era of journalism. It is also worth mentioning that language-model AI could be a dangerous weapon for misinformation, and if used wrongly it can make our lives harder instead of easier. That is why such technology needs some form of safeguards.

Sources:

https://www.technologyreview.com/2020/07/20/1005454/openai-machine-learning-language-generator-gpt-3-nlp/

https://www.sigmoid.com/blogs/gpt-3-all-you-need-to-know-about-the-ai-language-model/

https://www.ft.com/content/beaae8b3-d8ac-417c-b364-383e8acd6c8b

https://www.theatlantic.com/ideas/archive/2020/09/future-propaganda-will-be-computer-generated/616400/

https://maraoz.com/2020/07/18/openai-gpt3/