Author Archives: Anna Szkwarek

Artificial Intelligence and Business Areas

Reading Time: 2 minutes

Today, some organisations apply AI to existing processes in order to automate them or add insights. Others undertake a total makeover, overhauling the entire organisation with AI. However, neither approach delivers the speed or the depth of change companies need in order to grow.

Organisations instead need to identify business domains broad enough that AI can improve their financial performance as well as customer and employee experiences. Such domains share a few characteristics:

  • Areas that tackle systemic business problems, such as chronic process inefficiency or difficulty getting goods and services to customers, which need to be addressed together.
  • Areas with sponsors and teams. Reinvention with AI needs a supportive business leader (or even several) and a team covering roles such as product owner, translator and change lead, alongside AI practitioners.
  • Areas with AI assets and reusable data. It is important to choose domains in which the data needed to run the AI models overlaps, so that every new AI project within the domain can build on previous ones instead of starting from zero.

Most leaders can identify around eight to ten domains where AI could transform their business. However, companies are most successful when they start by focusing on one or two priority domains, chosen for value, feasibility and leadership support, so that they can build up their capabilities and skills before expanding further.

Source: https://www.mckinsey.com/featured-insights/artificial-intelligence#

Most common cases of Machine Learning effectiveness

Reading Time: 2 minutes

Since the coronavirus pandemic began, many companies, including those in the IT industry, have struggled; the shift to remote work introduced new waste into how businesses are managed. Nevertheless, data science and machine learning continue to show almost limitless room for expansion.

Machine learning algorithms discover insights in real-world data and use them to predict the future, and when new data becomes available machine learning programs adapt and produce new predictions. There are, however, particular situations in which machine learning clearly outperforms linear and purely statistical approaches. The three most common situations where it makes a big difference are listed below:

Engineers experiencing problems when coding rules

Human-oriented tasks such as recognising spam emails have traditionally been handled with rule-based solutions. Engineers have to write and update many lines of code whenever new factors influence the answer, and when too many factors are involved it becomes very difficult for humans to keep the rules precise. Machine learning programs avoid this problem: instead of hand-written rules, an algorithm extracts the relevant patterns from the data automatically.
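
To make the contrast concrete, here is a minimal, hypothetical sketch in Python (not from the article): a hand-written keyword rule next to a tiny scikit-learn text classifier that learns the same distinction from labelled examples. The messages, labels and keyword list are invented for illustration.

```python
# Minimal sketch: hand-coded rules vs. a learned spam classifier.
# The toy messages, labels, and keyword list are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Rule-based approach: every new spam pattern means another hand-written rule.
SPAM_KEYWORDS = {"winner", "free", "claim", "prize"}

def rule_based_is_spam(message: str) -> bool:
    words = set(message.lower().split())
    return len(words & SPAM_KEYWORDS) >= 2

# Learned approach: the algorithm extracts patterns from labelled examples.
messages = [
    "Claim your free prize now, winner!",
    "Meeting moved to 3pm, see agenda attached",
    "You are a winner, claim the reward today",
    "Lunch tomorrow? Let me know what works",
]
labels = ["spam", "ham", "spam", "ham"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(rule_based_is_spam("Free prize for every winner"))        # True
print(model.predict(["Free prize for every lucky winner"])[0])  # likely 'spam'
```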

A solution for millions of cases

A few thousand payments can be categorised as fraudulent or legitimate by hand, but doing so becomes tedious, and eventually impossible, when dealing with millions of transactions. As the user base grows, it is no longer feasible to review payments manually. Machine learning handles this type of large-scale problem effectively with little or no human intervention.
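
As a hypothetical illustration of the same point, the sketch below uses scikit-learn's IsolationForest to flag suspicious transactions in a synthetic dataset of a million records; the data and thresholds are invented, not drawn from the article.

```python
# Minimal sketch: flagging suspicious transactions at scale with an
# unsupervised anomaly detector. The synthetic data is invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# A million "normal" transactions (amount, hour of day) plus a few outliers.
normal = rng.normal(loc=[50.0, 14.0], scale=[20.0, 4.0], size=(1_000_000, 2))
outliers = rng.normal(loc=[5000.0, 3.0], scale=[500.0, 1.0], size=(50, 2))
transactions = np.vstack([normal, outliers])

detector = IsolationForest(contamination=0.0001, random_state=0)
detector.fit(transactions)

flags = detector.predict(transactions)          # -1 = suspicious, 1 = normal
print(f"Flagged {np.sum(flags == -1)} of {len(transactions)} transactions")
```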

Possible, but not cost-efficient

Some tasks can be processed quickly by people, but at a high cost. When a person assesses DMV forms for in-state and cross-state car purchases to determine their validity, the business process is well defined and each form may take only a few minutes to check. Allocating that much manual labour, however, would not be budget-friendly. This is where machine learning, with pay-as-you-go pricing, offers a fully scalable alternative.

Overall, it is important to bear in mind that machine learning is just a tool. Machine learning models provide advanced algorithms that identify patterns in data, and when they are applied to the right use cases they can reduce the amount of time spent on IT operations, adding significant business value and cutting costs.

Source: https://www.provintl.com/blog/5-common-machine-learning-problems-how-to-beat-them

AI risk is a global problem

Reading Time: 2 minutes

Nicolas Miailhe on the case for global coordination on AI

In December 1938, the nuclear physicist Leo Szilard wrote a letter to the British Admiralty telling them he had given up on his invention, the nuclear chain reaction. At around the same time, a research team in Berlin achieved the first fission of the uranium atom. Within a few years the Manhattan Project was under way, and by 1945 the first atomic bomb had been dropped on Hiroshima. Four years later, the Soviet Union successfully tested its first atomic weapon. The story of nuclear technology has striking parallels with the field of Artificial Intelligence.

Today, the development of Artificial Intelligence is viewed as a potential global risk, which means the technology has to be managed globally. Just as international institutions have reduced the risk of nuclear war, global coordination around AI is needed to mitigate its potential negative impacts.

Nicolas Miailhe is a leading expert on AI’s global coordination problem and the founder of The Future Society, a global nonprofit whose primary goal is to encourage the responsible adoption of AI and to ensure that governments worldwide identify its risks. A graduate of the Harvard Kennedy School of Government, he currently advises on AI policy. Some of his views on AI coordination are summarised below:

  • The same AI system that takes the tedium out of writing emails can also be used to customise phishing attacks at scale; AI’s opportunities and challenges are closely linked. Slowing down the development of AI would be undesirable, so the solution lies in regulation combined with investment.
  • To regulate AI, a shared definition of it first needs to be established. International agreements on AI are only possible if the parties share the same understanding of what it is.
  • Nicolas also argues that addressing disagreements over AI timelines is important. Some AI researchers believe human-level general AI could be achieved within the next decade and therefore urge greater attention to AI safety and alignment; others are sceptical, with Coursera co-founder Andrew Ng famously saying that worrying about it is “like worrying about overpopulation on the planet Mars”.

Source: https://www.youtube.com/watch?v=cKclc-KThIE

Amazon makes three major AI announcements during re:Invent 2019

Reading Time: 3 minutes

Amazon has kicked off its annual re:Invent conference in Las Vegas and made three major AI announcements.

During a midnight keynote, Amazon unveiled Transcribe Medical, SageMaker Operators for Kubernetes, and DeepComposer.

Transcribe Medical

The first announcement we’ll be talking about is likely to have the biggest impact on people’s lives soonest.

Transcribe Medical is designed to transcribe medical speech for primary care. The feature is aware of medical speech in addition to standard conversational diction.

Amazon says Transcribe Medical can be deployed across “thousands” of healthcare facilities to provide clinicians with secure note-taking abilities.

Transcribe Medical offers an API and can work with most microphone-equipped smart devices. The service is fully managed and sends back a stream of text in real-time.

Furthermore, and most importantly, Transcribe Medical is covered under AWS’ HIPAA eligibility and business associate addendum (BAA). This means that any customer that enters into a BAA with AWS can use Transcribe Medical to process and store personal health information legally.

SoundLines and Amgen are two partners which Amazon says are already using Transcribe Medical.

Vadim Khazan, president of technology at SoundLines, said in a statement:

“For the 3,500 health care partners relying on our care team optimisation strategies for the past 15 years, we’ve significantly decreased the time and effort required to get to insightful data.”

SageMaker Operators for Kubernetes

The next announcement is Amazon SageMaker Operators for Kubernetes.

Amazon’s SageMaker is a machine learning development platform and this new feature lets data scientists using Kubernetes train, tune, and deploy AI models.

SageMaker Operators can be installed on Kubernetes clusters and jobs can be created using Amazon’s machine learning platform through the Kubernetes API and command line tools.
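
The article notes that jobs are created through the Kubernetes API and command-line tools. Purely as an illustrative, hypothetical sketch, the same custom resource could be submitted from Python with the Kubernetes client; the CRD group/version and spec field names below are assumptions based on the announcement and should be checked against the operator documentation, and the role ARN, image URI and S3 path are placeholders.

```python
# Hypothetical sketch: submitting a SageMaker TrainingJob custom resource
# from Python. The apiVersion, kind, plural, and spec field names are
# assumptions about the operator's CRD; the ARN, image, and S3 path are
# placeholders, not real values.
from kubernetes import client, config

config.load_kube_config()  # use the current kubectl context

training_job = {
    "apiVersion": "sagemaker.aws.amazon.com/v1",   # assumed group/version
    "kind": "TrainingJob",
    "metadata": {"name": "xgboost-example", "namespace": "default"},
    "spec": {
        "roleArn": "arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder
        "region": "us-east-1",
        "algorithmSpecification": {
            "trainingImage": "ACCOUNT.dkr.ecr.us-east-1.amazonaws.com/xgboost:1",  # placeholder
            "trainingInputMode": "File",
        },
        "outputDataConfig": {"s3OutputPath": "s3://my-bucket/output"},  # placeholder
        "resourceConfig": {
            "instanceCount": 1,
            "instanceType": "ml.m5.large",
            "volumeSizeInGB": 5,
        },
        "stoppingCondition": {"maxRuntimeInSeconds": 3600},
    },
}

# Submit the custom resource; the operator then creates the SageMaker job.
client.CustomObjectsApi().create_namespaced_custom_object(
    group="sagemaker.aws.amazon.com",
    version="v1",
    namespace="default",
    plural="trainingjobs",  # assumed plural name of the CRD
    body=training_job,
)
```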

In a blog post, AWS deep learning senior product manager Aditya Bindal wrote:

“Customers are now spared all the heavy lifting of integrating their Amazon SageMaker and Kubernetes workflows. Starting today, customers using Kubernetes can make a simple call to Amazon SageMaker, a modular and fully-managed service that makes it easier to build, train, and deploy machine learning (ML) models at scale.”

Amazon says that compute resources are pre-configured and optimised, only provisioned when requested, scaled as needed, and shut down automatically when jobs complete.

SageMaker Operators for Kubernetes is generally available in AWS regions including US East (Ohio), US East (N. Virginia), US West (Oregon), and EU (Ireland).

DeepComposer

Finally, we have DeepComposer. This one is a bit more fun for those who enjoy playing with hardware toys.

Amazon calls DeepComposer the “world’s first” machine learning-enabled musical keyboard. The keyboard features 32 keys spanning two octaves and is designed for developers to experiment with pretrained or custom AI models.

In a blog post, AWS AI and machine learning evangelist Julien Simon explains how DeepComposer taps a Generative Adversarial Network (GAN) to fill in gaps in songs.

After recording a short tune, the user selects a model for their favourite genre and sets the model’s parameters, then configures the hyperparameters and a validation sample.

Once this process is complete, DeepComposer then generates a composition which can be played in the AWS console or even shared to SoundCloud (then it’s really just a waiting game for a call from Jay-Z).

Developers itching to get started with DeepComposer can apply for a physical keyboard ahead of availability, or get started now with a virtual keyboard in the AWS console.

Resources:
1. https://artificialintelligence-news.com/2019/12/03/amazon-ai-announcements-reinvent-2019/

AI can help people with bipolar disorder

Reading Time: 2 minutes

Two professors explain how artificial intelligence can help to improve the lives of individuals living with bipolar disorder.

Melvin McInnis, professor of bipolar disorder and depression, and Emily Mower Provost, associate professor of computer science and electrical engineering, are working on an AI system to help unlock deeper insights about individual bipolar sufferers.

Speaking on a panel titled “Artificial Intelligence, Personalized Technology, and Mental Health” at the Ann Arbor District Library in Michigan, the professors first discussed their reasons behind getting involved with the project.

For Provost, her reason is a passion for the application of engineering to have a positive impact on people’s lives. “It gives me an opportunity not only to try to create new and really innovative algorithms, but when you put a human-centred swing into AI, then you also have the opportunity to really join engineering and science,” Provost said.

McInnis has been researching bipolar for over 30 years. During his time as a physician, he has met family members of bipolar sufferers who can detect when their relative is at risk of an episode through their speech.

Using Provost’s knowledge of algorithms, McInnis hopes to teach an AI to recognise such changes in the speech of bipolar sufferers. Detecting these changes will help to provide early notice of an impending episode.

“Our work is to identify biological markers that are physiological markers that are in speech,” McInnis said. “How can we teach the computer to do what the family member is doing?”
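
The researchers’ actual pipeline isn’t described in detail, but a minimal, hypothetical sketch of the general approach, extracting acoustic features from speech with librosa and training a simple classifier to flag clips that deviate from a person’s baseline, could look like this (the synthetic audio and labels are invented for illustration):

```python
# Minimal sketch (not the researchers' actual system): summarise speech clips
# with MFCC features and train a classifier to flag clips that differ from a
# person's baseline. The synthetic audio and labels are invented.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

SR = 16_000  # sample rate in Hz

def speech_features(signal: np.ndarray, sr: int = SR) -> np.ndarray:
    """Summarise a clip as the mean of its MFCCs (a common speech feature)."""
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

rng = np.random.default_rng(0)
# Stand-in clips: in practice these would be recorded phone-call audio.
baseline_clips = [rng.normal(0.0, 0.1, SR * 2) for _ in range(20)]
atypical_clips = [rng.normal(0.0, 0.3, SR * 2) for _ in range(20)]

X = np.array([speech_features(c) for c in baseline_clips + atypical_clips])
y = np.array([0] * 20 + [1] * 20)  # 0 = baseline, 1 = atypical speech

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict([speech_features(rng.normal(0.0, 0.3, SR * 2))]))  # expect [1]
```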

If successful, the project will help to improve the lives of those living with bipolar and those around them. For example, an alert could be set up if signs of an impending episode are detected.

With the peace-of-mind of such an alert, relatives could go about their day without feeling they need to be around constantly to watch for potential signs of an episode. Meanwhile, the bipolar sufferer can benefit from greater independence and ensure they get help promptly if needed.

“Your device can give an alert and say, ‘Maybe you should talk to your doctor soon,’” McInnis explained. “You can share this information with your care team, with your support network, so that you can be part of a team that’s helping you stay healthy longer.”

One of the key challenges in rolling out such technology internationally is cultural differences around the globe; even a smile can be read differently from culture to culture. A baseline of what is “normal” will need to be established for each individual.

McInnis says that, in his work with bipolar patients, “up to 20 percent of these individuals end their lives by suicide”. Hopefully, the project becomes another example of how AI can have not only a life-changing impact but a life-saving one too.

Resources:
1. https://artificialintelligence-news.com/2019/09/30/researchers-ai-help-people-bipolar-disorder/

McAfee: Keep an eye on the humans pulling the levers, not the AIs

Reading Time: 2 minutes

Security firm McAfee has warned that it’s more likely humans will use AI for malicious purposes rather than it going rogue itself.

It’s become a clichéd comparison, but people are still concerned that a self-thinking killer AI like Skynet from the Terminator films will be created.

McAfee CTO Steve Grobman spoke at this year’s RSA conference in San Francisco and warned that the wrong humans in control of powerful AIs are his company’s primary concern.

To provide an example of how AIs could be used for good or bad purposes, Grobman handed over to McAfee Chief Data Scientist Dr Celeste Fralick.

Fralick explained how McAfee has attempted to predict crime in San Francisco using historic data combined with a machine learning model. The AI recommends where police could be deployed to have the best chance of apprehending criminals.
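
The talk does not reveal McAfee’s model, but a minimal, hypothetical sketch of the general idea, using a kernel density estimate over synthetic historic incident coordinates to rank candidate patrol locations, might look like this:

```python
# Minimal sketch, not McAfee's model: rank candidate patrol locations by
# fitting a kernel density estimate over historic incident coordinates.
# The coordinates below are synthetic stand-ins.
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
# Historic incidents clustered around two hotspots (latitude, longitude).
incidents = np.vstack([
    rng.normal([37.78, -122.41], 0.005, size=(300, 2)),
    rng.normal([37.75, -122.44], 0.005, size=(200, 2)),
])

kde = KernelDensity(bandwidth=0.005).fit(incidents)

# Score a coarse grid of candidate patrol locations and pick the densest.
lats = np.linspace(37.70, 37.82, 25)
lons = np.linspace(-122.52, -122.36, 25)
grid = np.array([[lat, lon] for lat in lats for lon in lons])
scores = kde.score_samples(grid)

top = grid[np.argsort(scores)[-5:]]
print("Highest-density candidate locations:\n", top)
```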

Most law-abiding citizens would agree this is a positive use of AI. However, in the hands of criminals it could be used to pinpoint where to commit a crime and have the best chance of avoiding capture.

In another demo at the conference, Fralick showed a video in which her words were spoken by Grobman, an example of a ‘DeepFake’.

“I used freely available, recorded public comments by you to create and train a machine learning model that let me develop a deepfake video with my words coming out of your mouth,” Fralick explained. “It just shows one way that AI and machine learning can be used to create massive chaos.”

DeepFakes are opening up a wide range of new threats, including fraud through impersonation. Another is blackmail, with the threat of releasing sexually explicit fakes to embarrass an individual.

“We can’t allow fear to impede our progress, but it’s how we manage the innovation that is the real story,” Grobman concluded.

Sources:
1. https://artificialintelligence-news.com/2019/03/06/mcafee-keep-eye-humans-ais/

AI could help humans live forever

Reading Time: 3 minutes

During a panel discussion on transhumanism at this year’s MWC, one expert predicted AI could figure out how to make a human live forever.

‘If You’re Under 50, You’ll Live Forever: Hello Transhumanism’ was the name of the session, which featured Alex Rodriguez Vitello of the World Economic Forum and Stephen Dunne of Telefonica-owned innovation facility Alpha.

Transhumanism is the idea that humans can evolve beyond their current physical and mental limitations using technological advancements. In some ways, this is already happening.

Medical advancements have extended our lifespans and AI is helping to make further breakthroughs in areas such as cancer treatment.

CRISPR gene editing will one day help to eliminate disorders prior to birth. “You can eliminate cancer, muscular dystrophy, multiple sclerosis… all these things,” comments Vitello.

Artificial limbs will go beyond matching the abilities of natural body parts and provide things such as enhanced vision or superhuman strength beyond what even Arnie achieved in his prime.

These are exciting possibilities, but some transhumanist concepts are many years from becoming available. Even when they are, most enhancements will remain unaffordable for quite some time.

Cryogenics, the idea of being frozen to be revived years in the future, is one such example of something that’s possible today but unaffordable to most.

One concept is that we’ll be able to live forever virtually through storing a digital copy of our brains. American inventor and futurist Ray Kurzweil wants his brain to be downloaded and uploaded elsewhere when he dies.

“What’s more, he [Ray] has all these recordings of his father and he wants to take all of this information and put it on a computer brain to see if he can reproduce the essence of his father,” says Vitello.

This kind of thing requires the ability to emulate the brain. While huge strides in computing power are being made, we’re some way off from that level of processing power.

Even what consciousness is still eludes researchers. Only last year a whole new type of neuron was discovered, which goes to show how little we know about the brain at this point.

“The company I used to work for [Neurolectrics] has a project on measuring consciousness, but just the level of it,” Dunne continues. “We just don’t know how this stuff works at a very fundamental level.”

When asked how far along ‘the loading bar’ we are towards brain emulation, Dunne said he’d put it at somewhere around one percent. However, he believes things such as stimulating the brain to improve memory retention or boost certain abilities are a lot closer.

That isn’t without its own challenges. Dunne explains how it’s almost impossible for a sighted person to learn braille because not enough brain power is dedicated to the task.

“If you enhance one feature, you kind of have to take that processing power from somewhere else,” he says. “To learn braille you need to be blind as otherwise you’re using your visual cortex and there’s not enough computing power for the task.”

Dunne then goes on to note how AI could help to speed up breakthroughs that are difficult for us to comprehend today: “If we do invent artificial general intelligence, it might figure out all we need to know about the brain to do this within the next 30 years.”

AI is keeping the dream alive, but it seems unlikely that many – if any – under 50 will be living forever. At least we can look forward to some transhumanist enhancements in the coming years.

Sources:
1. https://artificialintelligence-news.com/2019/02/28/transhumanism-ai-how-humans-live-forever/

How Coca Cola uses AI

Reading Time: 4 minutes

If you sell hundreds of different products across multiple countries, perceptions and customer behaviour can vary greatly from market to market. Understanding these differences helps tailor specific messages for different markets, rather than relying on a one-size-fits-all approach.

When you’re dealing with global brands, user data from social media or generated through your own systems (such as vending machines) is vast and messy. AI provides a viable method of structuring this data and drawing out insights. Computer vision technology such as image recognition tools can analyse millions of social media images to help a brand understand when, how and by whom its products are enjoyed.

As well as informing marketing decisions, brands that are fully invested in AI are beginning to use it for designing new products and services.

As the world’s largest beverage company, Coca-Cola serves more than 1.9 billion drinks every day, across over 500 brands, including Diet Coke, Coke Zero, Fanta, Sprite, Dasani, Powerade, Schweppes and Minute Maid. Big data and artificial intelligence (AI) power everything that the business does.

The Coca-Cola company is a shining example of a business that has re-ordered itself around data and intelligence. It has long shown an appreciation of the fact that today’s technology offers an unprecedented opportunity to reassess just about every aspect of how business is conducted. Rethinking itself as a technology-driven company with a focus on the strategic implementation of data and AI means it is likely to retain its place at the head of the pack for the foreseeable future.

Artificial intelligence is driving many of the digital initiatives being pushed forward in today’s industries, and Coca-Cola is using the technology to drive its own unique purchasing experiences.

The original Coca-Cola recipe was developed by John Pemberton as a medical solution to treat his own morphine addiction. Initially branded as French Wine Coca in 1885, the non-alcoholic version, Coca-Cola, was developed the following year. Originally sold as a patent medicine, Pemberton marketed Coca-Cola with claims it could cure ailments as wide-ranging as indigestion, nerve disorders, headaches, and impotence. From there, the brand grew exponentially – eventually dropping the medical pretensions and being marketed as a pure soft drink instead.
Today, Coca-Cola is one of the world’s largest and most recognizable brands. The company employs over 100,000 people and has revenues of $41,863 million, placing it at #64 on the Fortune 500.

Marketing soft drinks around the world is not a “one-size-fits-all affair”. Coca-Cola products are marketed and sold in over 200 countries. In each of these markets there are local differences concerning flavours, sugar and calorie content, marketing preferences and the competitors faced by the brand. This means that to stay on top of the game in every territory, the company must collect and analyse huge amounts of data from disparate sources to determine which of its 500 brands are likely to be well received. The taste of its most well-known brands can even differ from country to country, and understanding these local preferences is a hugely complex task.

Coca-Cola collects data on local drink preferences through the interfaces of its touch-screen vending machines; over 1 million of them are installed in Japan alone. To understand how its products are discussed and shared on social media, the company has set up 37 “social centers” to collect data and analyse it for insights using the Salesforce platform. The aim is to create more of the content that is shown to be effective at generating positive engagement. In the past, this content was created by humans; however, the company has been actively looking at developing automated systems that create adverts and social content informed by social data. It also uses image recognition technology to target users whose shared pictures suggest they could be potential customers.

In one example of this strategy in action, Coca-Cola targeted adverts for its Gold Peak brand of iced tea at people who posted images suggesting they enjoy iced tea, or in which the image recognition algorithms spotted logos of competing brands. Once the algorithms determined that specific individuals were likely to be fans of iced tea, and active social media users who shared images with their friends, the company knew that targeting these users with adverts would be an efficient use of its advertising budget.

For purchase verification, off-the-shelf image recognition technology proved insufficient for reading the low-resolution dot-matrix printing used to stamp product codes onto packaging, so Coca-Cola developed its own image recognition solution using Google’s TensorFlow technology.
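
The articles don’t describe Coca-Cola’s actual model, so the block below is only a minimal, hypothetical sketch of the kind of convolutional network one might build with TensorFlow/Keras to classify individual characters cropped from low-resolution product-code images; the image size, class count and random training data are invented for illustration.

```python
# Minimal sketch, not Coca-Cola's actual system: a small convolutional
# network that classifies single characters cropped from low-resolution
# product-code images. Image size, class count, and the random training
# data below are invented for illustration.
import numpy as np
import tensorflow as tf

IMG_SIZE = 32          # assumed size of a cropped character image
NUM_CLASSES = 36       # digits 0-9 plus letters A-Z

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(IMG_SIZE, IMG_SIZE, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Stand-in data; in practice these would be labelled crops of dot-matrix codes.
rng = np.random.default_rng(0)
x_train = rng.random((512, IMG_SIZE, IMG_SIZE, 1), dtype=np.float32)
y_train = rng.integers(0, NUM_CLASSES, size=512)

model.fit(x_train, y_train, epochs=1, batch_size=64)
print(model.predict(x_train[:1]).argmax(axis=-1))  # predicted character class
```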

Analysis of the data from vending machines by AI algorithms allows Coca-Cola to understand more accurately how the buying habits of its billions of customers vary across the globe. Computer vision analysis and natural language processing of social media posts, as well as deep learning-driven analysis of social engagement metrics, allow Coca-Cola to produce social advertising that is more likely to resonate with customers and drive sales of its products. Applying TensorFlow to create convolutional neural networks enabled scanners to recognise product codes from a simple photograph, increasing customer engagement with Coca-Cola’s loyalty programs around the world.

The famous soft drink has been around for a long time, but one of the ways in which Coca-Cola is driving innovation today is by incorporating advanced artificial intelligence technology into its worldwide network of branded vending machines. Artificial intelligence is going to be at the forefront of all digital and omnichannel marketing in the future, and we can surely expect to see many more innovations like Coca-Cola’s vending machines moving forward.

References:
1. https://artificialintelligence-news.com/2019/05/07/how-coca-cola-is-using-ai-to-stay-at-the-top-of-the-soft-drinks-market/
2. https://foodandbeverage.wbresearch.com/blog/coca-cola-artificial-intelligence-ai-omnichannel-strategy
3. https://bernardmarr.com/default.asp?contentID=1171
4. https://www.warc.com/newsandopinion/news/how_cocacola_does_ai/41582
