Author Archives: Alina Minenko

AI is creating human proteins that can treat cancer, COVID and flu

I think almost all of you are familiar with DALL-E, the striking AI technology introduced quite recently. It is a platform that generates images from a simple description of what you wish to see. Social media sites are now crowded with the surprisingly detailed, often photorealistic images created by this and similar technologies. However, some scientists see it as more than just a means of creating pictures: they see it as a way to treat diseases such as cancer or the flu.

Recently, using these modern AI technologies, scientists have started generating blueprints for new proteins: tiny biological machines that play a significant role in how our bodies operate, from digesting food to moving oxygen through the bloodstream. Although proteins are produced naturally in our bodies, researchers are striving to design new ones that can fight diseases and do things our bodies cannot do on their own.

For more than 30 years, David Baker, the head of the Institute for Protein Design at the University of Washington, has worked to develop artisanal proteins. By 2017, he and his colleagues had shown that this was feasible. However, they did not anticipate how the emergence of new AI technologies would radically speed up the work, cutting the time required to produce new blueprints from years to only a few weeks.

Proteins are made up of long chains of chemical compounds that twist and fold into three-dimensional structures. Recent research from AI labs such as DeepMind, which is owned by Alphabet, has demonstrated that neural networks can successfully predict the three-dimensional shape of any protein in the body based only on the sequence of smaller compounds it contains.

Image caption: T1037, part of a protein from the Cellulophaga baltica crAss-like phage phi14:2, a virus that infects bacteria.

Researchers are now taking this a step further, using AI systems to create blueprints for entirely new proteins that do not exist in nature. The objective is to design proteins that adopt highly specific shapes, because a particular shape can perform a particular function, such as blocking the virus that causes COVID-19. Researchers provide a rough description of the protein they want, and a diffusion model then generates its three-dimensional shape. However, scientists still need to test the result in a wet lab with actual chemical compounds to make sure it works as expected.
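To give a rough feel for what "a diffusion model generates its three-dimensional shape" means, here is a minimal, purely illustrative Python sketch of diffusion-style sampling over 3D coordinates. Everything in it (the toy denoiser, the sizes, the noise schedule) is an assumption made up for this example; real protein-design systems use large neural networks trained on known structures and much richer conditioning.

```python
# Toy sketch of denoising diffusion over protein backbone coordinates.
# NOT any real tool: the "denoiser" below is a stand-in for a trained network.
import numpy as np

def toy_denoiser(coords, t):
    """Stand-in for a trained network that predicts the noise in the
    coordinates at step t. Here we simply nudge each residue toward the
    midpoint of its chain neighbours so the example runs end to end."""
    neighbours = (np.roll(coords, 1, axis=0) + np.roll(coords, -1, axis=0)) / 2
    return coords - neighbours  # pretend this is the predicted noise

def sample_backbone(n_residues=60, n_steps=50, seed=0):
    rng = np.random.default_rng(seed)
    coords = rng.normal(size=(n_residues, 3))   # start from pure noise
    betas = np.linspace(1e-4, 0.05, n_steps)    # noise schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    for t in reversed(range(n_steps)):          # reverse (denoising) process
        eps = toy_denoiser(coords, t)
        coords = (coords - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            coords += np.sqrt(betas[t]) * rng.normal(size=coords.shape)
    return coords  # candidate 3D backbone; still needs wet-lab validation

print(sample_backbone().shape)  # (60, 3): one x, y, z position per residue
```

The point is only the shape of the procedure: start from noise, repeatedly denoise toward something structured, then hand the candidate to the lab for real-world testing.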

On the one hand, some experts take this innovation with a grain of salt. Frances Arnold, a Nobel laureate, dismisses it as "just a game." She stresses that what really matters is what a generated structure can actually do.

By contrast, Andrei Lupas, a German evolutionary biologist, is convinced that it will change medicine, research and bioengineering: "It will change everything." AlphaFold helped him find the structure of a protein he had been tinkering with for almost a decade.

Personally, I agree with the majority of researchers and see AI as a tool for exploring innovations that scientists could not previously have conceived on their own.

References:

https://www.seattletimes.com/nation-world/artificial-intelligence-intelligence-turns-its-artistry-to-creating-human-proteins/

https://www.nature.com/articles/d41586-020-03348-4

https://www.scientificamerican.com/article/one-of-the-biggest-problems-in-biology-has-finally-been-solved/

An AI has passed a university law exam

Artists, drivers, authors, voice actors, and plagiarists are just a few of the occupations that artificial intelligence is already competing with. Less than a month after an AI was used to help a man fight a speeding ticket, an AI known simply as "Claude" completed a law and economics exam at George Mason University. The AI, developed by the research company Anthropic, earned a "marginal pass." Anthropic has received funding from accused crypto fraudster Sam Bankman-Fried.

Claude is being positioned as a challenger to ChatGPT, the OpenAI text generator that has swept the Internet. Unlike ChatGPT, Claude is still in closed beta. Anthropic calls Claude's approach "Constitutional AI," which is designed to handle adversarial questions, whereas other artificial intelligence programs are frequently rendered useless when faced with them.

According to economics professor Alex Tabarrok, the exam was graded anonymously. He also described Claude as "a competitor" to, and "an improvement" on, OpenAI's GPT. Tabarrok did point out that the response had certain flaws, including the fact that it was "mainly opinion," and that a stronger response would have included more economic justification.

Enterprise AI app developer Scale found Claude to be "more entertaining than ChatGPT," noting: "Its ability to write coherently about itself, its constraints, and its objectives seem to also allow it to more naturally answer inquiries on other subjects."

To be fair, not everyone was as impressed with Claude’s achievements. The Financial Times published an article on Tabarrok’s conclusions and stated, “To be honest, this seems more like Claude merely consumed and puked up a McKinsey study.”

Of course, the fact that Claude only managed a "marginal pass" shows that there is still a long way to go before artificial intelligence software can successfully replace attorneys. Then again, until recently people didn't believe AI could pass a law exam at all, so it may not be long before AI can challenge even the best lawyers.

References:

https://futurism.com/ai-passing-grade-law-exam

https://www.businessinsider.com/ai-financed-by-sam-bankman-fried-passed-law-economics-exam-2023-1?IR=T

Telesurgery. Worthwhile or dangerous?

Would you ever believe that a surgeon could operate on a patient who is 400 km away? That is exactly what telesurgery allows. It is an innovative surgical approach that connects patients and surgeons who are geographically far apart. The surgeon observes the surgical field on a screen and uses a haptic arm to move the robotic arm during the operation.

On the one hand, telesurgery has many benefits compared with conventional surgical methods. First and foremost, it is an excellent solution for those who, for whatever reason, cannot travel to get medical care: not only financial constraints but also travel-related health issues can be a problem for some people. Secondly, it enables surgery through smaller incisions, and its robotic arms can reach hard-to-access areas of the body. It also filters out a surgeon's possible tremor, resulting in improved surgical accuracy. Consequently, the risks of damaging surrounding structures, of blood loss, and of infection are reduced. Aside from this, telesurgery gives surgeons from different centres the opportunity to collaborate and operate on a patient simultaneously.

On the other hand, telerobotic surgery still has its issues. Firstly, a time lag is considered a major drawback: it has been determined that a delay of more than 2 seconds can pose a threat. Secondly, being operated on by a surgeon the patient has never met face-to-face can cause distrust and anxiety. And finally, Tamas Haidegger, a researcher at Obuda University in Budapest who studies space telesurgery, noted that even with a master surgical plan things can go wrong: blood circulation can collapse, or a patient can have an unforeseen reaction to certain drugs. That is why a trained surgeon still needs to be on-site. Nonetheless, he believes that robots will soon be augmented with artificial intelligence and able to go into autopilot mode. That would be a significant breakthrough in human history!

Having considered the possible pros and cons of telesurgery, in my opinion the technology is worth adopting widely. I agree that the idea of a robot performing an operation can frighten some people. In reality, however, surgeons are in full control of the machine at all times and the robot's movements are far more precise.

References:

https://www.cureus.com/articles/54068-telesurgery-and-robotics-an-improved-and-efficient-era

https://www.bbc.com/future/article/20140516-i-operate-on-people-400km-away

https://my.clevelandclinic.org/health/treatments/22178-robotic-surgery

Glasses that let deaf people read conversations

"Artificial intelligence will make people's lives easier." I have seen this phrase many times, and once again I am convinced that it is true.

Have you ever thought about how hard it is to be deaf? Fewer job opportunities, social withdrawal, and emotional problems caused by a drop in self-esteem and confidence. Now imagine being able to READ speech in real time simply by wearing glasses. This is what augmented reality (AR)-powered smart glasses can offer. The XRAI Glass software converts speech into subtitles that appear on the lenses of the user's glasses. Moreover, it is paired with a live-captioning app that translates more than nine languages, with more coming in the next few months. The translation happens in near-real time, allowing users to keep up with the conversation. All processing is done on the mobile phone, so the glasses themselves are lightweight and comfortable to wear all day long. The glasses can even be plugged into a TV to display subtitles; in fact, one member of XRAI's trial group, a lawyer, uses them plugged directly into the microphone system of the court.
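XRAI has not published its pipeline, but the general idea (microphone audio in, text captions out) can be sketched in a few lines of Python. The snippet below is purely illustrative, not XRAI Glass's actual software, and assumes the open-source SpeechRecognition package plus PyAudio for microphone access.

```python
# Minimal illustrative live-captioning loop: listen in short chunks,
# transcribe each chunk, and print the text (a headset display would
# render it as a subtitle instead).
# Requires: pip install SpeechRecognition pyaudio
import speech_recognition as sr

recognizer = sr.Recognizer()

with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)       # calibrate for background noise
    print("Listening...")
    while True:
        # capture short phrases so captions appear with little delay
        audio = recognizer.listen(source, phrase_time_limit=5)
        try:
            caption = recognizer.recognize_google(audio)   # cloud speech-to-text
            print(caption)
        except sr.UnknownValueError:
            pass  # nothing intelligible heard; keep listening
```

The real product adds translation, on-device processing and a head-mounted display, but the basic loop of capturing short audio chunks and turning them into text is the same idea.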

HOW DID THE IDEA EMERGE?

The XRAI Glass technology was inspired by one of the co-founders' 97-year-old deaf grandfather: "If he's enjoying subtitles while watching television, why can't we subtitle his life?" That is how the story of XRAI Glass began.

PRICE

The live-captioning app is free to download, but users can pay for premium features if desired (up to $49.99/month). The glasses themselves cost $484 and can be purchased from EE or Amazon.

Image caption: XRAI smart glasses turn audio into captions to let deaf people "see" conversations (Daily Mail Online).

DRAWBACKS

The company plans to reach 70,000 people by the end of 2023. However, some improvements are needed before these glasses become part of daily use. Some users report that the glasses may struggle to follow group conversations where people speak over each other. Aside from this, a quiet environment is a prerequisite for accurate speech recognition. That is why, in July 2022, XRAI Glass launched a public trial so people can try the smart glasses and share their experiences. The company aims to expand the AI's capabilities even further to provide the best service it can.

PERSONAL OPINION

What I like most is that people will never know what you are looking at while talking with them. Those who suffer from hearing loss will be able to feel equal to others and will no longer see themselves as "exceptional" or "unusual." Moreover, these glasses are a good solution not just for deaf people but also for anyone who struggles to take in information by ear: they can hear and read speech simultaneously while wearing the smart glasses.

I watched a couple of videos of deaf people trying these glasses for the first time and was touched to see them break down in tears at the hope of a better, easier life. I am happy that modern technologies can ease the daily lives of people with hearing disabilities, and even better solutions will come as AI progresses at unbelievable speed.

References:

https://metro.co.uk/2022/08/01/smart-glasses-allow-deaf-people-to-see-conversations-with-subtitles-17106685/

https://metro.co.uk/2022/11/17/we-tried-the-smart-glasses-that-let-you-see-conversations-17764768/

https://www.hackread.com/ai-powered-smart-glasses-deaf-speech/

https://www.businesswire.com/news/home/20220728005555/en/XRAI-Glass-Revolutionary-New-Glasses-Allow-Deaf-People-and-People-Who-Have-Hearing-Loss-to-%E2%80%98See%E2%80%99-Conversations

https://www.cbsnews.com/miami/news/captioned-smart-glasses-let-deaf-people-see-rewind-conversations/

What is web3? Is it really decentralized?

Not long ago, the Internet was all about pages where you could only read information. Now, by contrast, people can create and share content online. So how did we get here?

Before we explore the definition of web3, it is worth breaking the history of the Internet down into three periods: web1, web2, and web3. Web1 is read-only, web2 is read-write, and web3 is read-write-own. Let's figure out what that means.

Web1 arrived around 1990 and featured mainly static websites, allowing users only to view information; online interaction was limited. The next generation of the Internet arrived in the form of web2. Instead of just serving content to users, companies began to enable user-to-user interaction through platforms that let people create content themselves. However, big tech companies gained control over that content and used it for monetary gain: Amazon and Facebook, for example, collected personal information to enable better-targeted marketing. Users became concerned about their data privacy and digital identity. Clearly, a decentralized version of the web with users in control of their own data was needed. That is how web3 emerged.

Web3 aims to create a peer-to-peer network, built on blockchain technology, that lets users take power away from the tech giants and gain complete ownership of their data. Users can connect with each other, share data and engage in transactions privately, without depending on intermediaries. All of web3's core concepts contribute to the formation of a decentralized system: no single centralized server controls the data, no central authority governs decision-making, and there is no central place where information is stored. Sounds nice, doesn't it? But isn't it too good to be true?

To answer this question, we can look at the ownership data. The top 9% of accounts hold 80% of the total market value of NFTs on the Ethereum blockchain, and only 2% of accounts own 95% of all Bitcoin. Summing up, we can draw a conclusion: "The advertised decentralization of power out of the hands of a few has been a re-centralization of power into the hands of fewer."

Web3 is also accused of having middlemen of its own. The majority of decentralized applications, so-called dApps, rely on centralized services, because otherwise it would be extremely time-consuming and capital-intensive for developers to run their own infrastructure. Platforms such as Alchemy and Moralis help developers build dApps much faster.
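To make that reliance concrete, here is a tiny illustrative Python sketch (using the web3.py library; the endpoint and API key are placeholders, not real credentials) of a dApp backend that reads blockchain data not from its own node but through a hosted provider such as Alchemy.

```python
# Sketch of a dApp reading Ethereum data through a hosted RPC provider.
# The URL below follows Alchemy's format, but the key is a placeholder.
from web3 import Web3

RPC_URL = "https://eth-mainnet.g.alchemy.com/v2/<YOUR_API_KEY>"
w3 = Web3(Web3.HTTPProvider(RPC_URL))

if w3.is_connected():
    # The data itself lives on the decentralized chain, but every read here
    # passes through the provider's centralized servers.
    latest = w3.eth.get_block("latest")
    print("Latest block number:", latest["number"])
else:
    print("Could not reach the RPC provider.")
```

Running a full Ethereum node instead would remove that middleman, which is exactly the trade-off described above: convenience and cost push most dApp developers toward hosted infrastructure.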

Despite these contradictions, web3 is still a novelty, a young and evolving system, and many developers continue to work on improving it. Although web3 currently depends on centralized infrastructure, building high-quality, reliable decentralized infrastructure takes time. We are on the way to a better web.

References:

https://venturebeat.com/datadecisionmakers/the-hard-truths-about-web3-what-no-one-else-is-talking-about/

https://nftnow.com/guides/a-comprehensive-guide-to-web3-the-future-of-the-internet/

https://101blockchains.com/top-web3-features/

https://101blockchains.com/web3-faqs/

https://ethereum.org/ru/web3/

https://www.notboring.co/p/the-web3-debate?s=r
