Author Archives: Kalemba Kuba

Is AI the future of diagnosing cancer?

There are a few different tests and procedures used to diagnose breast cancer. The most important one is the mammogram, an X-ray of the breast. It can detect breast cancer as early as two years before the tumor grows to the point where it can be felt by a doctor.

Example of a mammogram, showing normal dense breast tissue

Interpreting a mammogram is a really difficult task – it takes about 10 years of training as a doctor and specialist to become a radiologist qualified to do it. What is more, the NHS (National Health Service – one of the national healthcare systems in the UK) requires two radiologists to analyze each mammogram. Whenever they disagree, a third radiologist is brought in.

Taking all this into consideration, it seems hard to improve the quality and accuracy of mammogram interpretation. However, according to a study published in the journal Nature, AI is now able to produce results as good as radiologists'!

An international team of researchers created a computer model and trained it to detect breast cancer using images from 29,000 women. The algorithm managed to outperform individual doctors: compared to a single radiologist, it produced 1.2% fewer false positives and 2.7% fewer false negatives. When compared with the current system involving two radiologists, the AI model was just as accurate. However, reading X-rays is a time-consuming and tiring process for humans, while AI analyzes a mammogram in just a few seconds.

Despite these fantastic results, this doesn't mean that AI will now replace radiologists. We have to remember that this was just a research study – the AI has not yet been allowed to analyze mammograms in the clinic, and when it is, there will still be a radiologist in charge. The algorithm could, however, potentially eliminate the need to involve two radiologists. According to Prof Ara Darzi, director of the Cancer Research UK Imperial Centre, the AI system will significantly improve the accuracy of diagnoses and free up radiologists for more demanding work. The UK is estimated to be short of as many as 1,000 radiologists.

Combining the human mind and AI in analyzing mammograms is probably the best option for now. Since the system has only been tested in a research study, it doesn't seem rational to let it work alone on such an important task just yet. However, since AI can work 24/7 (humans cannot!) and is faster than humans, it is a great idea to combine the two. Perhaps once a sufficient number of diagnoses have been performed by both humans and AI, the results will show that AI can be left on its own. For now, this seems like a great opportunity and shows that it may be possible to make screening even more accurate in the future!

Sources:

https://www.mayoclinic.org/diseases-conditions/breast-cancer/diagnosis-treatment/drc-20352475

https://www.cancercare.org/publications/82-early_detection_and_breast_cancer

https://en.wikipedia.org/wiki/National_Health_Service_(England)

https://www.bbc.com/news/health-50857759

https://www.bloomberg.com/news/articles/2020-01-02/google-shows-ai-can-spot-breast-cancer-better-than-doctors

Is AI going to make CAPTCHAs useless?

On the internet, most of us are required from time to time to prove that we are real humans rather than robots. Sometimes it is just about solving a simple math problem; sometimes we have to pick out a few photos. These small and slightly annoying tests are called CAPTCHAs. Their job is to prevent bots from doing specific things – for example, creating fake accounts or buying tons of limited-edition products at retail price to resell them at a higher price afterward. CAPTCHAs are also designed to be ridiculously simple for humans yet difficult for machines.

Example of a CAPTCHA

However, it seems like bots are getting better and better at solving them! Vicarious, a Californian AI firm significantly funded by Amazon CEO Jeff Bezos and Mark Zuckerberg, managed to crack basic CAPTCHAs as early as 2013 – with 90% accuracy! CAPTCHA designers weren't sleeping either: over the years, CAPTCHAs have been made more difficult to outsmart the smarter bots, while still having to remain trivial for humans.

Google's reCAPTCHA test seems like the best type of CAPTCHA right now, as humans can solve it only 87% of the time. I'm pretty sure you have all come across it, as it is used very frequently. How does it work, then? Well, it asks users to choose the photo tiles that include specific features, like buses or traffic lights.

Example of a reCAPTCHA challenge

Vicarious has been working on enabling its bots to overcome reCAPTCHA. According to the paper, their AI now beats Google's test 66.6 percent of the time! But how did Vicarious accomplish that? How do their bots actually work? Vicarious based its AI on something it calls a "Recursive Cortical Network", designed to mimic thought processes that typically occur in human brains. At the same time, those processes require less computing power than a full neural network. As a result, the AI can identify objects even when they are shaded or obscured – just like on reCAPTCHA.
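The RCN itself is described in Vicarious's Science paper; as a toy illustration only (not the actual RCN), the snippet below shows why occlusion is the hard part: naively matching every pixel penalizes the occluded region, while matching only the visible pixels still identifies the character.

```python
# Toy illustration (NOT Vicarious's RCN): recognizing a partially
# occluded character. A 5x5 binary template of the letter "T".
TEMPLATE_T = [
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
]

def match_score(image, template, occlusion_mask=None):
    """Fraction of (visible) pixels that agree with the template."""
    agree = total = 0
    for r in range(len(template)):
        for c in range(len(template[r])):
            if occlusion_mask and occlusion_mask[r][c]:
                continue  # skip pixels hidden by the occluder
            total += 1
            if image[r][c] == template[r][c]:
                agree += 1
    return agree / total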

On the other hand, we have to remember that not many people have access to this powerful CAPTCHA-cracking AI; most still do not possess any useful bots in this area. Thanks to that, I think CAPTCHAs are not entirely useless for now. Still, CAPTCHA designers should keep improving their tests – although it seems AI may reach the point where designing a test that stops bots but stays easy for humans becomes impossible.

 

Sources:

www.artificialintelligence-news.com

https://science.sciencemag.org/content/358/6368/eaag2612.full

 

A bug which let anyone block your iPhone

Recently, Apple managed to fix a pretty annoying bug which let literally anyone temporarily lock another person's iPhone or iPad.

AirDrop

Kishan Bagaria found the bug in AirDrop, an ad-hoc service that allows users to quickly transfer files between devices over Wi-Fi or Bluetooth without using a mass storage device. AirDrop supports the iOS and macOS operating systems and was first introduced as one of the new features in Mac OS X Lion and iOS 7.

So how did this bug actually work? Using an open-source tool, Bagaria could repeatedly send files, again and again, to all devices within his wireless range (that had their AirDrop set to receive files from "everyone").

But why is receiving so many files a problem? Well, when an AirDrop file arrives, iOS blocks the device's whole display (so once you are under attack, you can't even turn Bluetooth off) until the user accepts or rejects the incoming file. The bug occurred because Apple didn't set a limit on the number of requests a device could receive, so a potential attacker could keep sending files indefinitely, creating a loop in which the user had to keep rejecting the files over and over again.

So what was the only way to escape the attack? Bagaria said the only possibility was to run away. Once the attacked user was no longer within the attacker's wireless range, he or she could turn off Bluetooth to prevent further attacks.

Apple fixed this hilarious bug by simply limiting the number of requests allowed over a short period of time. So once you update your iOS to version 13.3, there is no need to worry about getting your phone blocked when your Bluetooth is turned on or when you are connected to public Wi-Fi!
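Apple hasn't published the internals of the fix, but the general technique is a sliding-window rate limiter. Here is a minimal sketch in Python (all names invented for illustration): requests from a sender that exceed a quota within a time window are silently ignored instead of locking the screen.

```python
# Minimal sketch of a sliding-window rate limiter, the kind of fix
# that stops an AirDrop-style request flood. Illustration only.
from collections import deque
import time

class RequestRateLimiter:
    def __init__(self, max_requests=3, window_seconds=10.0):
        self.max_requests = max_requests
        self.window = window_seconds
        self.history = {}  # sender id -> deque of request timestamps

    def allow(self, sender_id, now=None):
        """Return True if this sender's request should be shown to the user."""
        now = time.monotonic() if now is None else now
        timestamps = self.history.setdefault(sender_id, deque())
        # Drop timestamps that have fallen out of the window
        while timestamps and now - timestamps[0] > self.window:
            timestamps.popleft()
        if len(timestamps) >= self.max_requests:
            return False  # quota exceeded: silently ignore the request
        timestamps.append(now)
        return True
```

With this in place, a flood of requests stops reaching the user after the first few, and the quota refills on its own once the attacker's burst ages out of the window.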

Sources:

https://techcrunch.com/2019/12/10/ios-airdrop-lock-up-iphones/

AI on guard of the largest stock exchange

The Nasdaq stock market is the largest stock exchange by volume. Because of this, it has become a popular target for fraudsters. That is why Nasdaq needs to be monitored incessantly to prevent fraud, which can come in many different forms, such as:
churning – stockbrokers buying or selling a client's investments more often than necessary, to earn more money in commissions
spoofing – placing a big buy/sell order (without intending to execute it) in order to create artificially high/low demand for a particular stock

Nasdaq recently announced that this monitoring is now being done with the help of artificial intelligence. The deep-learning system currently cooperates with human analysts to keep track of more than 17.5 million transactions every day. The system is an addition to the previously used surveillance software, whose main goal is to detect signs of possible fraud using statistics and a set of rules. It works like this:
the system detects a sign of possible abuse -> an alert is sent to human analysts -> the analysts decide whether it is fraud or not

According to Nasdaq, the new system will be more accurate at detecting fraud, which will mean less unnecessary work for analysts. Moreover, they claim it will be better at identifying more complicated patterns such as spoofing, which is likely to become more popular among fraudsters.

How does the system identify abuses? It is based on historical examples. The system has thoroughly analyzed frauds from the past, and whenever their typical signs occur, it reaches out to an analyst who specializes in the particular area. For example, when fraud is detected in an automotive stock, the alert is sent to an analyst familiar with the automotive industry and its market. Afterward, the analyst enters the results back into the system. Thanks to that, the deep-learning system can keep developing and learning from this data.
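Nasdaq's actual code is not public, but the loop described above can be sketched roughly (all names, rules, and thresholds below are invented): a crude rule flags suspicious order cancellations, the alert is routed to a sector specialist, and the analyst's verdict is stored as a labelled example the learning system can train on.

```python
# Hypothetical sketch of the surveillance loop: rule-based detection,
# routing to a specialist analyst, and a feedback store for learning.

ANALYSTS = {"automotive": "analyst_a", "tech": "analyst_b"}

training_data = []  # (alert, verdict) pairs the model can learn from

def detect_spoofing(orders, cancel_ratio_threshold=0.9):
    """Very crude spoofing signal: nearly all large orders get cancelled."""
    large = [o for o in orders if o["size"] >= 10_000]
    if not large:
        return False
    cancelled = sum(1 for o in large if o["cancelled"])
    return cancelled / len(large) >= cancel_ratio_threshold

def route_alert(stock_sector):
    """Send the alert to the analyst covering this sector, if any."""
    return ANALYSTS.get(stock_sector, "analyst_on_duty")

def record_verdict(alert, verdict):
    """The analyst's decision becomes a labelled training example."""
    training_data.append((alert, verdict))
```

The point of the sketch is the shape of the loop, not the rule itself: the old system is essentially `detect_spoofing`-style statistics, while the new deep-learning layer is what consumes `training_data`.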

On the other hand, the system will only be able to detect fraud patterns seen in the past, which makes it probable that some fraudsters will manage to get around it. Because of that, Doug Hamilton (Nasdaq's managing director of artificial intelligence) has decided that the new system will work alongside the old one rather than fully replacing it, for now. What the system needs is to adapt faster to fraudsters' new tactics, as "the patterns and types of abuse that are happening are constantly evolving as well" ~ Tony Sio (Nasdaq's head of marketplace regulatory technology).

 

Sources:

https://www.technologyreview.com

https://dictionary.cambridge.org

 

Let Alexa organize your life for you!

When Amazon introduced the first generation of the Echo, Alexa was able to fulfill people's simple requests, such as reading a weather forecast or sounding a morning alarm.

Now, five years later, those simple requests have become just minor capabilities of Amazon's product. Alexa can control more than 85,000 smart home products and execute 100,000 so-called "skills". Moreover, this voice assistant processes billions of interactions every week, which lets it generate enormous quantities of data about its users. Despite all of this, Amazon claims that this is just the beginning.

Rohit Prasad, vice president and head scientist of Amazon Alexa, has revealed some information about where the project is headed next. It turns out the main goal for the voice assistant is not only to passively execute orders but also to predict what the user might want or need. Alexa is to become a tool for shaping and organizing your life – or at least helping you with it. To achieve that goal, Alexa will need to know a lot more about you.

In September, Amazon launched a series of "on the go" Alexa products, such as Echo Buds (earphones) and Echo Loop (a smart ring). They let Alexa gather significantly larger quantities of information about you, your daily habits, and your life in general.

However, Alexa will also require updates to process and use all of this information. Prasad's team is working on things like basic voice and video recognition and improved language understanding. They are also aiming to develop Alexa's prediction and decision-making abilities and higher-level reasoning. Alexa will learn which skills are typically used together and then make predictions based on that knowledge. For example, if many users order food while watching movies, Alexa will recommend those things in conjunction.

Sometimes you might ask Alexa, through your home Echo device, to send you notifications – for example, about your open stock positions. When the time comes, you might not be at home. Alexa will then (already knowing that it was you, and not another member of your family or your roommate, thanks to voice recognition) find out where you are based on your last used Echo-enabled device. For example, if you are driving your car, Alexa will send the notification directly to the car.

New software is currently being tested. Alexa will work like this:

Alexa recognizes which of the 100,000 skills the user wants to use -> Alexa analyzes the context in which it is used (who the user is, which device he/she is using, and where) -> Alexa chooses how to react based on the user's previous choices.
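Amazon's actual implementation is not public, but the three-step flow can be sketched as follows (all names and rules below are invented for illustration): recognize the skill, gather context, then pick a response based on what this user chose before in a similar situation.

```python
# Hypothetical sketch of the flow: skill recognition -> context -> choice.

def recognize_skill(utterance):
    """Step 1: map an utterance to a skill (toy keyword rules)."""
    if "weather" in utterance:
        return "weather_forecast"
    if "notify" in utterance or "alert" in utterance:
        return "notifications"
    return "fallback"

def gather_context(user, device, location):
    """Step 2: who is asking, on which device, and where."""
    return {"user": user, "device": device, "location": location}

def choose_reaction(skill, context, history):
    """Step 3: prefer whatever this user chose before on this device."""
    key = (skill, context["device"])
    return history.get(key, "ask_user_how_to_proceed")
```

A real assistant would replace the keyword rules with learned models, but the structure – and the reason Alexa needs per-user history and device context – is the same.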

All of this sounds like an amazing achievement, and from a scientific point of view it undoubtedly is. On the other hand, many people may be worried about their privacy, as Alexa will be following them every day and everywhere.

 

References:

https://www.technologyreview.com/s/614676/amazon-alexa-will-run-your-life-data-privacy/

https://www.technologyreview.com/f/614436/amazons-new-products-show-it-wants-alexa-to-always-be-with-you/