Tag Archives: Artificial Intelligence

These AI company founders swear they are not racist – but is that really the case?

Reading Time: 3 minutes

We’re approaching 2023. We can fly across the world in a matter of hours. We can communicate in real-time with anyone, anywhere on Earth. And now, we can make anyone sound like a white American.

As wild as it may seem, that is exactly what Sanas’ AI does – the algorithm takes anyone’s voice as input and, with minimal delay, outputs a slightly robotic version of it with a ‘standard’ English accent – colloquially, the accent of a white, educated U.S. citizen.
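
To make the idea more concrete, here is a minimal sketch in Python (emphatically not Sanas’ actual system) of what a low-latency, frame-by-frame voice pipeline looks like. The convert_chunk function is a hypothetical stand-in for the proprietary accent-conversion model; the rest only illustrates that audio is processed in short frames so the added delay stays small.

import numpy as np

FRAME_MS = 20                     # process audio in short frames to keep latency low
SAMPLE_RATE = 16_000
FRAME_LEN = SAMPLE_RATE * FRAME_MS // 1000

def convert_chunk(chunk: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the accent-conversion model:
    a speech frame goes in, a 'neutral accent' frame comes out.
    Here it simply returns the frame unchanged."""
    return chunk

def stream(frames):
    """Feed consecutive frames through the converter, mimicking a live call."""
    for frame in frames:
        yield convert_chunk(frame)

if __name__ == "__main__":
    # Simulate one second of microphone input as 50 random 20 ms frames.
    mic_input = [np.random.randn(FRAME_LEN).astype(np.float32) for _ in range(50)]
    converted = list(stream(mic_input))
    print(f"processed {len(converted)} frames of {FRAME_MS} ms each")

In a real deployment the frames would come from the live call audio and the converted frames would be played back to the other side almost immediately.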

Why would anyone even use it?

As usual, the answer is money.

Sanas was designed for offshore call centers. Hiring a worker in countries like Pakistan or India costs roughly half as much as hiring one in the United States.[1] The company operates on the assumption that all the callers must do is read a script and carefully follow sales and customer-handling instructions.

Sanas’ president Marty Sarim stated: ‘We don’t want to say that accents are a problem because you have one, they’re only a problem because they cause bias and they cause misunderstandings.’[2] Nevertheless, the company has been flooded with accusations of perpetuating racial stereotypes and reinforcing racial bias.

For one, Nakeema Stefflbauer (AI and tech angel investor, CEO of the women-led computer programming group FrauenLoop) described Sanas’ mission as a form of ‘digital whitening.’[3] She believes that the company emphasizes not comprehension but comfort – the comfort of those who do not want to understand, empathize with, or acknowledge individuals of different backgrounds and, as a result, different accents.

There has also been outrage among the company’s target group: Mia Shah-Dand (the founder of Women in AI Ethics and an immigrant from India with a non-American accent) found the company’s goal ‘very triggering’. She slammed Sanas for trying to erase people’s uniqueness and for propagating the message that they’re ‘not good enough’.[3]

Three of Sanas’ four founders (Shawn Zhang, Maxim Serebryakov and Andrés Pérez Soderi), who met while studying at Stanford
source: https://edition.cnn.com/2021/12/19/us/sanas-accent-translation-cec/index.html

Naturally, Sanas’ board and founders have addressed these claims: 

90% of their employees, and all four of the founders, are immigrants.[1] Additionally, two of the board members – Massih Sarimad and Sharath Keysheva Narayana – have previously worked in call centers and witnessed racial abuse in the workplace firsthand.[4]

Sanas’ product is allegedly designed to be operated only by the call center worker – only they can switch the program on or off, and they have full autonomy in deciding whether they want their accent to be translated. While this sounds like a rational idea, it is unlikely to work that way in practice – after all, call centers are highly formalized structures with specific instructions. Thus, it is doubtful that managers would grant their workers such a high degree of ‘freedom’.

So far, Sanas has raised over 37 million USD in investments,[5] and it has ambitious plans. The aim is to introduce many more accents into the technology, to allow for seamless communication, as if everyone were a local. The company also plans to expand into the entertainment industry – Maxim Serebryakov (the CEO of Sanas) said that “There are also creative use cases such as those in entertainment and media where producers can make their films and programs understandable in different parts of the world by matching accents to localities”.

So, what’s the verdict?

Personally, I don’t believe that Sanas’ operations are inherently racist – though they might seem quite questionable at first. Considering that most members of the team are immigrants themselves, they are the ones who truly understand the pain of being discriminated against for their accent or appearance. If this is a solution that reduces racist incidents and lowers stress among call center employees, then so be it. It is worth noting, however, that things could turn upside down quickly should Sanas give call center managers the option to force the use of this technology on their employees.

Should things pan out the way Maxim Serebryakov and the rest of the board say, Sanas could be a powerful tool for mitigating racist remarks and for optimizing costs and performance of call center outsourcing. The only way to find out is to wait and see.

Would you be interested in trying Sanas? Do you think having an American/standard English accent on online calls and meetings would help you in your career?

Until next time,

Jan


[1] https://www.worldwidecallcenters.com/call-center-pricing/

[2] https://www.sfgate.com/news/article/sanas-startup-creates-american-voice-17382771.php

[3] https://www.insider.com/ai-startup-sanas-accent-translation-technology-call-center-racism-2022-9

[4] https://spidersweb.pl/2022/10/sanas-startup-hindusi-call-center.html

[5] https://www.crunchbase.com/organization/sanas


A digital weapon

Reading Time: 3 minutes

Have you ever thought about why social media was created? Why did people become so engaged with it so quickly? Sometimes I wonder whether the emergence of social media was really a step toward social development, or whether it set in motion an inevitable threat to the whole population.

Let’s consider Facebook – the biggest social network in the world, one that gave birth to an unbelievable number of internet communities. When Facebook announced its mission – to give people the opportunity to stay connected with the world – users embraced it immediately. But that mission evaporated as quickly as people became obsessed with the platform. Facebook produced obsession: by using it again and again, people developed a habit of scrolling their feeds every day, a habit that became a kind of drug for them. And while they were totally absorbed in the news feed, people did not notice one crucial thing – their behavior, consciousness and emotions were being deliberately manipulated.

It is not hard to guess that a primary purpose of Facebook is to control people. Social networks exist with the strict mission of keeping people’s desires, preferences, intentions, goals, habits and even fears under control. They know us better than anyone else, playing the role of our personal diary. Have you ever noticed that people are much braver about expressing their opinions on social media, since they feel free to be heard without being punished? Facebook’s data analysts know this for sure. By surveilling and analysing users’ searches, comments, messages and interactions, they are able to detect and sort individuals according to their temperament and character.

With the help of Artificial Intelligence systems developed by Facebook’s data analysts, an individual profile is created for every user and used to adjust their behavior by serving posts and advertisements that perfectly fit their desires and interests. People get imprisoned in social networks without comprehending it and gradually turn into robots who are guided to buy certain products and services or to read certain pieces of news intended especially for them. And consumer and social behavior is not the only thing under the control of AI – artificial intelligence systems also significantly affect our morals, preconceptions and values.

I would like to elaborate on Facebook’s scandal with Cambridge Analytica. In 2015, data from numerous Facebook users was collected without their consent by the data analytics firm in order to build a system that would affect people’s choices in the presidential election. By spreading carefully designed targeted polls, advertisements and posts, Cambridge Analytica managed to influence the consciousness of voters and change their opinion about the election. An extremely similar situation is unfolding nowadays in Russia, where citizens are subjected to severe brainwashing by a flood of posts and news produced by the Russian government, which ultimately leaves many Russians unable to consciously perceive information about the war in Ukraine. Because of these extreme manipulation tools, they are blind to the nightmare happening to millions of innocent Ukrainian people. Unfortunately, such examples of manipulation through social media are limitless. Social media users are doomed to be the victims of various types of propaganda promoted by social networks.

So what is the real purpose of social media’s existence? Social media is an efficiently designed digital weapon. By applying Artificial Intelligence, the people behind Facebook, Instagram or Twitter are capable of influencing the consciousness and actions of millions of individuals. With the help of AI systems, social media turns people into robots who unintentionally do things they do not want to do. It exterminates the autonomy of people’s opinions and mindsets. So is AI really a step toward our world’s development, or is it the beginning of humanity’s “death”, the vanishing of people’s independence and freedom? Maybe AI was intended not for creating robots, but for turning people into them? Maybe its most crucial goal was not to make human life easier but to simplify control over people, control of their thoughts and actions? We will probably find out. Or maybe not, because we won’t be able to tell, being already under control.

References: 

https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election

https://www.linkedin.com/pulse/how-facebook-using-artificial-intelligence-bernard-marr

https://www.netflix.com/pl-en/title/81254224


Artificial Intelligence in the aviation industry

Reading Time: 3 minutes

Since the first commercial flight in 1919, a lot has changed. Initially, everything associated with flying was planned and done manually. Now that AI exists, many of those tasks can be done automatically, which improves both the passenger experience and the air traffic flow. This technology can also be used to support ground services.

Why do airlines and airports collect so much data?

Nowadays a lot of data is gathered in the aviation industry. It’s not only about the technical aspects of planes and traffic management but also about passenger satisfaction. As we know, customers – in this case travellers – are the most important factor in this industry, because of how much money they spend on flying every month or year. That’s why airlines and airports are now putting pressure on collecting feedback from passengers to improve their services.

How do airlines use AI and Data Science to improve services?

With so much data gathered, airlines can feed it into various AI systems that get the work done for them – faster, more accurately and definitely more conveniently. There are many ways of using AI, but let me focus on the four that are, in my opinion, the most interesting to know about.

1. Predictive maintenance

For a long time, airlines have been struggling with flight delays and cancellations due to unplanned maintenance work. When that happens, they are obliged to pay compensation to passengers stuck at airports. That’s why they have introduced predictive AI systems that help monitor the health of specific parts of an aircraft. Such a system also suggests to mechanics when a part will need to be replaced and how long that will take. What’s more, predictive algorithms can anticipate potential failures before they actually happen. With predictive maintenance applied, an airline can reduce its expenses significantly. However, AI only assists in the maintenance process; it won’t replace essential aircraft inspections.
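
To illustrate the idea (this is not any airline’s production system), a predictive-maintenance model can be as simple as a classifier trained on historical sensor readings labeled with whether the part failed shortly afterwards. The sketch below uses scikit-learn on synthetic data; the feature names and the 0.7 alert threshold are assumptions invented for the example.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic sensor snapshots per component: [vibration, oil_temp, cycles_since_overhaul]
X = rng.normal(loc=[2.0, 90.0, 500.0], scale=[0.5, 5.0, 200.0], size=(2000, 3))
# Toy ground truth: parts with high vibration and many cycles tend to fail soon.
y = ((X[:, 0] > 2.5) & (X[:, 2] > 600)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))

# Flag components whose predicted failure probability exceeds a maintenance threshold.
risky = model.predict_proba(X_test)[:, 1] > 0.7
print("components to schedule for inspection:", int(risky.sum()))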

2. In-airport self-service

In this area, Covid-19 has changed a lot. The pandemic sped up the development of self-service devices such as self-check-in machines, but those were only the first steps toward automating a passenger’s trip. Today, airlines provide solutions that let passengers use self-service devices all the way from check-in to boarding. For example, during the security check the technology analyses your face and compares it with the scan of the document you place on the scanner. What’s more, it also checks whether you are wanted by the police. All these technologies are meant to reduce check-in time and improve passenger satisfaction. However, traditional ways will still exist.

3. Flight and fuel optimization

As you already know, flight delays can be very costly for airlines, which is why they do everything they can to avoid them. It’s not easy to plan and schedule everything while considering every aspect of a flight, such as weather conditions and air traffic flow. Now AI analyses real-time data – traffic flow, weather conditions, turbulence and many other variables – so that dispatchers can make better decisions. It is also used to find optimal flight routes that reduce carbon dioxide emissions and cut fuel consumption.
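
The routing part can be pictured as a classic shortest-path problem: if every leg between waypoints is weighted by its estimated fuel burn (already adjusted for forecast winds), the cheapest routing falls out of Dijkstra’s algorithm. The waypoint names and fuel figures below are invented for the example.

import heapq

# Toy route network: estimated fuel burn (tonnes) between waypoints.
fuel_cost = {
    "WAW": {"A": 2.1, "B": 2.6},
    "A":   {"C": 3.0, "B": 0.8},
    "B":   {"C": 2.2},
    "C":   {"JFK": 4.5},
    "JFK": {},
}

def cheapest_route(graph, start, goal):
    """Dijkstra's algorithm: returns (total fuel, waypoint list) for the cheapest routing."""
    queue = [(0.0, start, [start])]
    best = {}
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if cost >= best.get(node, float("inf")):
            continue
        best[node] = cost
        for nxt, leg in graph[node].items():
            heapq.heappush(queue, (cost + leg, nxt, path + [nxt]))
    return float("inf"), []

print(cheapest_route(fuel_cost, "WAW", "JFK"))  # cheapest routing: WAW -> B -> C -> JFK, about 9.3 t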

4. Dynamic ticket pricing

If you have ever booked a flight ticket, you might have noticed that prices differ from day to day. They vary because of many factors, such as departure time, destination, flight distance, and the number of available seats. The cost of the same ticket can change minute by minute. That’s because airlines use an AI-driven technique called dynamic pricing: setting new prices so that they are as profitable as possible – unfortunately, only for the airlines.
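
A toy version of such a pricing rule might look like the sketch below. The multipliers are made-up assumptions and real revenue-management systems are far more sophisticated, but it shows how seat availability and time to departure can push the fare up.

def dynamic_price(base_fare: float, seats_left: int, seats_total: int, days_to_departure: int) -> float:
    """Toy dynamic-pricing rule: fares rise as the cabin fills up and departure approaches."""
    load_factor = 1 - seats_left / seats_total                 # share of seats already sold
    scarcity_multiplier = 1 + 0.8 * load_factor                # up to +80% when nearly sold out
    urgency_multiplier = 1 + 0.5 / max(days_to_departure, 1)   # last-minute premium
    return round(base_fare * scarcity_multiplier * urgency_multiplier, 2)

print(dynamic_price(base_fare=300, seats_left=150, seats_total=180, days_to_departure=60))  # early booking
print(dynamic_price(base_fare=300, seats_left=12, seats_total=180, days_to_departure=2))    # last minute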


To sum up, AI is a tool that helps airlines reduce their costs and improve their services, even though it won’t always be the main way of dealing with every aspect of aviation. Personally, I think it enhances the travel experience for passengers.

Let me know what your thoughts are. If you want to find out more about the use of AI in the aviation industry, I recommend watching AltexSoft’s video on the topic.

Sources:

altexsoft: https://www.altexsoft.com/blog/engineering/ai-airlines/

addepto: https://addepto.com/blog/fly-to-the-sky-with-ai-how-is-artificial-intelligence-used-in-aviation/

photo: https://www.topaviationsites.net/news/how-to-finance-an-aviation-business/


AI in medicine: a device that can predict the risk of heart disease from the retina of the eye

Reading Time: 2 minutes

Artificial intelligence has made our lives much easier since its inception. In the real world, everything is so connected with new technologies that it is simply impossible to imagine life without AI. And without any doubt, the contribution of technology to the field of medicine is the most important.

So, the topic of this post, as you can easily guess from the heading, is the contribution of AI to medicine.

A specially developed tool will in the future be able to detect and predict the risk of heart disease. According to a study in the British Journal of Ophthalmology, such cardiovascular screenings may soon be entirely possible. In just one minute, patients will be able to learn about their risk of heart disease, with no need for additional blood tests or similar procedures. Everything is reduced to maximum efficiency and minimum time. This is how artificial intelligence works!

Personally, I think it’s a very cool idea. Firstly, cardiovascular diseases are very common nowadays, and they are a real problem not only for older people. In addition, it is well known that after Covid-19 many people began to have heart problems, so health issues have only become more widespread.

The task of scientists is to save human lives. In many cases, when a disease is detected late, medical care is already ineffective and nothing can be done. In addition, high-quality medical care is expensive, so poorer people often do not get the examinations they need. A very good thing about this new tool is that its inventors emphasize how cheap it is. I believe that quality medicine should be available to all segments of society, regardless of wealth.

The tool studies the blood vessels located in the retina of the eye, taking various changes into account and making all kinds of measurements. It turns out that the blood vessels of the retina and the heart are closely connected, which makes it possible to determine the state of the heart and even predict various risks through the retina.
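
To give a flavour of how such a system could be structured (this is an illustrative sketch, not the published model), a convolutional neural network can map a retinal photograph to a single risk score. Untrained, the network below outputs a meaningless number; the real tool is trained on large sets of labeled retinal scans.

import torch
import torch.nn as nn

class RetinaRiskNet(nn.Module):
    """Tiny CNN that maps a retinal photograph to a single cardiovascular risk score in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1), nn.Sigmoid()
        )

    def forward(self, x):
        return self.head(self.features(x))

model = RetinaRiskNet()
fake_scan = torch.rand(1, 3, 224, 224)   # one RGB fundus image, 224x224 pixels
print("predicted risk score:", model(fake_scan).item())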

Of course, this is very convenient. Conventional analyses require time, specific conditions, reagents or equipment. It seems to me that using this software will require fewer resources and less human participation.

All in all, I think it’s a wonderful invention. So far, of course, it is just an idea confirmed by several studies, and it will be quite difficult to bring the code into real medical practice. But still, artificial intelligence is working for the benefit of humanity, and that is wonderful.

Sources:

AI eye checks can predict heart disease risk in less than minute, finds study | Heart disease | The Guardian

AI can identify heart disease from an eye scan | University of Leeds


Tesla’s new Optimus – will humanoids become an essential part of our households?

Reading Time: 2 minutes

In September 2021, Tesla announced that it was planning to build the company’s first humanoid robot, and this September, at AI Day 2022, we could see the results coming to life. On the 30th of September 2022, Elon Musk presented the Optimus prototype – made only of Tesla-designed components.

Before Optimus was presented, the company showed off a different robot doing simple tasks like watering plants, carrying boxes, walking, and picking up objects; however, it was not clarified whether the robot was moving on its own or being operated remotely. This would be utterly impressive progress since last year, except that the humanoid prototype Elon is planning to distribute is, in terms of abilities, far behind the experimental robot they showcased.

The unveiled prototype is not even able to walk yet; however, Elon Musk declares that Optimus’s capabilities will be ‘mind-blowing’ within 5 to 10 years, and he aims to mass-produce the robots at a relatively low cost of less than $20,000 to make them accessible to as many people as possible.

All this because Tesla wants to make these humanoids an essential part of people’s households, aiming to save them the time spent on mundane, everyday tasks. However, a question arises – do people really need a humanoid to walk their dog and pick up their groceries? And even if they do, why a humanoid? It would be far easier to develop a household robot that is not necessarily human-shaped.

Some sceptics also say that this is just another huge promise from Elon Musk, much like his autonomous cars. However, considering Tesla’s recent focus on AI, it is probable that we will see a breakthrough in this technology within a couple of years.

Tesla is set to unveil its 'Optimus' humanoid robot this month. What should we expect? | Euronews

Sources :

https://www.reuters.com/technology/elon-musk-set-showcase-teslas-humanoid-robot-after-delay-2022-09-30/

https://www.ft.com/video/51e4bfd4-9d50-43a7-afa1-53030dcf65fc

https://www.ft.com/content/ea6b6c12-9931-4e8f-9b9d-61e5c58fdfb8 


How could AI improve healthcare?

Reading Time: 3 minutes

Artificial Intelligence in today’s world is developing rapidly. It is expected that by 2025 the AI systems market will reach 791.5 billion dollars in revenue. In this article I want to focus on how AI is affecting the world’s healthcare. And I’m not going to show you a super fancy robot that will replace every doctor and even perform surgeries, but something that is far more accessible to people around the globe.

Ada Health is a free medical symptom-checking app. It helps you check your symptoms and discover what might be causing them. With the help of Artificial Intelligence, Ada compares your case with thousands of medical documents and conditions to give you the most probable causes. The app is available in seven languages: English, German, Portuguese, Spanish, French, Swahili, and Romanian. Swahili and Romanian were added thanks to funding from Fondation Botnar, which gives 119 million people better access to medical guidance. Ada now has over 12 million users and has completed 28 million symptom checks.

How does a symptom check work?

The app is designed like a chatbot. First you start a symptom assessment. The AI gets smarter the more you tell it, so you are asked a few simple questions (name, gender, date of birth and some personal questions about your health). Then you choose whether the assessment is for yourself or someone else. From there you search for your symptoms and briefly describe them. If you don’t know a medical term, you can check it within the app (a short explanation with a picture). When all the questions are answered, Ada’s AI processes your answers and you get your report. Note that this is not a medical diagnosis, only a suggestion of what might be the cause and what you could do about it.
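
Under the hood, the core step of any symptom checker is matching the reported symptoms against a medical knowledge base. Ada’s actual reasoning engine is proprietary, but the toy sketch below shows the general idea: score each condition by how many of its typical symptoms the user reported, then present the best matches as suggestions. The conditions and symptoms are invented for the example.

# Hypothetical mini knowledge base: conditions and their typical symptoms.
CONDITIONS = {
    "common cold": {"runny nose", "sore throat", "cough", "sneezing"},
    "flu":         {"fever", "cough", "muscle ache", "fatigue", "sore throat"},
    "migraine":    {"headache", "nausea", "light sensitivity"},
}

def rank_conditions(reported_symptoms):
    """Score each condition by the share of its typical symptoms the user reported."""
    reported = set(reported_symptoms)
    scores = {
        name: len(reported & symptoms) / len(symptoms)
        for name, symptoms in CONDITIONS.items()
    }
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

print(rank_conditions(["fever", "cough", "sore throat"]))
# flu scores highest (3 of 5 symptoms matched), then common cold (2 of 4), then migraine (0 of 3)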

Ada provides people with more information about their current state and encourages them to take better health-related actions. With the addition of Swahili, it could be a game changer in developing countries in Africa, where people often don’t have access to proper healthcare or find it too expensive. The app will make them more aware of their own health.

Ada is doing a great job at what it is supposed to do, but there is still a problem with the accessibility of healthcare. Governments in developing countries should work with initiatives like this one and develop a new healthcare system. In my opinion, apps like Ada should be used to interview and pre-diagnose patients. Then a person would only go for a quick examination and receive medical advice from a doctor. It could make poor systems more efficient, and one doctor could serve more people at a time.

In the future the app should be developed to gather and use even more personal information about one’s health. It could include congenital diseases, allergies, eating habits, sports activity, but also information collected every day through your smart devices. It could be integrated into a personal medical system that guides you through every aspect of your health.

Thank you for your time. Let me know what you think about this project and whether it could actually improve healthcare.

Sources:

https://ada.com/about/

https://www.cnbc.com/2021/05/27/samsung-and-bayer-invest-in-ai-doctor-app-ada-health.html

https://www.idc.com/getdoc.jsp?containerId=US49571222


AI technology used to fake employees

Reading Time: 2 minutes

More and more companies use AI technology to put fake employees on their “about us” pages. According to a recent report, numerous firms are employing Generative Adversarial Network (GAN) software to create AI-generated images of fake employees.
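
For context on the technique itself: a GAN’s generator is a neural network that turns a random “latent” vector into an image, and once trained on a large dataset of faces it produces photorealistic portraits of people who do not exist. The PyTorch sketch below shows the shape of such a generator; untrained, it outputs noise, and it is only meant to illustrate the mechanism, not any specific tool used by these companies.

import torch
import torch.nn as nn

class FaceGenerator(nn.Module):
    """DCGAN-style generator: a 100-dimensional noise vector in, a 64x64 RGB image out."""
    def __init__(self, latent_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 128, 4, 1, 0), nn.BatchNorm2d(128), nn.ReLU(),  # 4x4
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),           # 8x8
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),            # 16x16
            nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.BatchNorm2d(16), nn.ReLU(),            # 32x32
            nn.ConvTranspose2d(16, 3, 4, 2, 1), nn.Tanh(),                                 # 64x64
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

generator = FaceGenerator()
z = torch.randn(4, 100)         # four random points in the latent space
fake_images = generator(z)      # four 3x64x64 images (pure noise until the GAN is trained)
print(fake_images.shape)        # torch.Size([4, 3, 64, 64])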


I found out about it through a Business Insider report by Evan Ratliff which showed many companies using this technique. My first reaction was pure shock. I kept wondering how many times I had been fooled by AI-generated images. What is more, I kept thinking about how many times I had used the services of companies employing these techniques. But my next questions were: why do CEOs do this, and what is their point?

According to the Insider report, the point is making the company look bigger than it is. The owners believe an impression of a large workforce improves credibility. They are not wrong: many studies show that customers tend to think bigger companies are more professional than smaller ones and are therefore more likely to use their services. I don’t agree with such unethical methods. In my opinion, a small company with as few as three employees can often be better than one with one hundred.

Secondly, by using specially designed AI employees, CEOs can make the company look more diverse than it is. In other words, they put people of color on their “about us” pages, when in reality 99% of their employees are white. Is it ethical? In my opinion, using different cultures and nationalities to boost the company’s performance is not ethical. It is inappropriate to use images of people of color to appear more friendly and open-minded, when these people not long ago suffered from increased unemployment solely because of the color of their skin. If a company wants to be more diverse, the first thing to do is start employing people of different cultures and nationalities.

So now we all know that some of the employees on “about us” pages can be fake. But how would you react if I told you there is a company that faked all of its workforce? The website of Informa Systems, which has ties to the City of Austin Police Department, was covered with fake images. They faked not only the images of rank-and-file employees but also those of high-ranking ones – they even faked an image of a chief marketing officer. As the report found, only one of the Informa Systems employees pictured was real.

I think using AI to generate images of fake employees is harmful to customers. This is customer manipulation, and it should be prohibited. Moreover, there should be high fines for such techniques, because they distort the market.

What do you think? Let me know in the comments.

References:

https://www.businessinsider.com/ai-generated-images-fake-staff-appearing-on-companies-websites-2022-10?IR=T

https://smallbusiness.chron.com/advantages-large-business-21007.html


Will AI replace programmers?

Reading Time: 2 minutes

“AI assistants are already here, so work that before required 10 devs will require 9, then 8, and so on…” – is the answer of one of the Redditors at r/artificial

Indeed, there are AI models developed by OpenAI and DeepMind that are capable of coding in many languages. Many would argue that the job of programmers is too complex and too versatile to let machines automate all of the tasks within software development. However, AI already takes part in software development in the form of automatic code-writing assistants, automatic bug fixing, or project delivery estimation. Thanks to the gigantic amount of code available online via platforms like GitHub, researchers can develop Deep Learning models for writing code. Such AI solutions generate code based on a programming task description provided by the user:

  1. OpenAI Codex – a general-purpose programming model with natural language understanding, capable of writing code based on a problem description as well as explaining code in natural language. Proficient in more than a dozen programming languages, Codex can interpret simple commands in natural language and execute them on the user’s behalf.
  2. DeepMind AlphaCode – a deep learning model trained on over 700 gigabytes of code from GitHub repositories and tuned by its creators on problem statements, test cases, and submissions – correct and incorrect – from coding contests. When AlphaCode is given a programming challenge, it generates thousands of possible solutions and filters out those that work most efficiently. When tested on competitive programming contests, it achieved better results than 46% of participants.
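
Codex and AlphaCode themselves are not freely downloadable, but the workflow they enable (a task description in, code out) can be tried with a small open checkpoint from the Hugging Face Hub. The sketch below assumes the Salesforce/codegen-350M-mono model and the transformers library; any similar code-generation checkpoint would work the same way.

from transformers import pipeline

# Load a small, publicly available code model (Codex and AlphaCode are not downloadable).
generator = pipeline("text-generation", model="Salesforce/codegen-350M-mono")

prompt = '''def is_palindrome(s: str) -> bool:
    """Return True if s reads the same forwards and backwards."""
'''

completion = generator(prompt, max_new_tokens=48, do_sample=False)[0]["generated_text"]
print(completion)  # the model continues the function body based on the docstring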

Taking this into account, I believe that solutions like the two above are just the beginning of AI in programming and will evolve into systems capable of creating end-to-end code for users. This will drastically influence the labor market for programmers and their everyday tasks. Researchers and practitioners point out that the work of programmers consists not only of writing code but also of understanding stakeholders’ needs, adjusting the code to them, working iteratively, and so on; that’s why AI will not completely replace programmers, but will boost their effectiveness and let them focus on the most “human” aspects of programming.

In my opinion, the development of such solutions will slowly decrease the need for programmers in companies as most repetitive coding tasks become automated. However, creativity and innovativeness will be valued even more in programmers, and those able to demonstrate these two traits will be even more in demand.

Sources:

1. https://www.reddit.com/r/artificial/comments/ojjg5i/will_ai_replace_programmers_is_a_question_that_i/

2. https://www.techslang.com/will-programmers-become-obsolete-because-of-ai/

3. https://techmonitor.ai/technology/ai-and-automation/deepmind-alphacode-ai-software-developer

4. https://openai.com/blog/openai-codex/

5. https://medium.com/geekculture/will-ai-replace-programmers-fb6fcfd70b37


Collaborative community joins the race for the best AI

Reading Time: 2 minutes

Chatbots, online translators, text generators, and grammar checkers – these are a few of the many modern applications of artificial intelligence language models. The most popular and advanced models like GPT-3 or BERT are claimed to “understand” human language. Unfortunately, these solutions are not open to the public. Why? Because mainly big corporations like Google, Microsoft or Huawei can afford the enormous computing resources and the researchers needed to develop them.

According to the official GPT-3 paper, it would take 355 years to train this AI model on a computer equipped with one of the most advanced graphics cards.

Open-source solution

To tackle the problem of the centralization of hyper-advanced AI models, hundreds of researchers from all around the world have gathered to bring an open-source solution to the public. Independent research and industry volunteers formed a collaborative project called BigScience and created an enormous language AI which will complete its training in 4 months – anyone can track the progress of model training here: click! The model is expected to be excellent at performing various language-related tasks in 46 languages.

Implications of open-source

Currently, the most advanced AI owned by hi-tech companies can be accessed by individuals, but anyone who wants to use it either needs to meet specific conditions or pay for access. One of the reasons is that the organizations that own AI solutions like GPT-3 still don’t know all of the potential misuses that could harm people and thus want to control the purposes their solutions are used for. However, the new AI model created by the BigScience collaborative community will most probably be open to anyone, which means that its use cases will not be controlled.
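
If the finished BigScience model really is published openly, using it should be as simple as loading any other checkpoint from the Hugging Face Hub. The sketch below assumes the model appears under the bigscience organization and uses the much smaller bloom-560m sibling, since the full 176-billion-parameter version needs a GPU cluster to run.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-560m"   # small sibling of the full multilingual model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# The model is multilingual, so prompts in any of the supported languages work.
inputs = tokenizer("Open-source language models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))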

Governments should join the collaboration

Hyper-advanced AI models can solve many problems and bring enormous value to society. In my opinion, collaborative projects like BigScience show that sooner or later, hyper-advanced AI solutions will be available to anyone. That’s why I believe governments should look into the problem of harmful applications of AI and establish laws to prevent such cases. Hopefully, that would encourage at least some hi-tech corporations to share their resources and help democratize advanced AI safely!

Sources:

https://huggingface.co/bigscience/tr11-176B-ml-logs

https://openai.com/blog/better-language-models/

https://bigscience.huggingface.co/blog/building-a-tb-scale-multilingual-dataset-for-language-modeling

https://ied.eu/wp-content/uploads/2018/05/sharing-economy.png

https://www.reddit.com/r/GPT3/comments/p1xf10/how_many_days_did_it_take_to_train_gpt3_is/


Grammarly – a helping hand in improving your English grammar

Reading Time: 3 minutes

by Lev Hladush

Grammarly is both the name of a San Francisco-based company and of its main product – a communication assistant that helps any internet user correct grammar and typos as they write.

What is especially exciting about Grammarly is that the assistant relies heavily on Artificial Intelligence, which makes it a particular object of interest for us, students of the Management and Artificial Intelligence program. Grammarly uses AI to help millions of people worldwide make their communication clear, effective and error-free. Everyone knows that communication is key to both personal and professional success, and the company’s mission is to improve lives by improving communication. The big vision behind it is to help people articulate their thoughts in a way that’s clear and effective, in a way that makes them understood as intended.

Core to this mission is the work in natural language processing (NLP), and the team relies on deep expertise in NLP, machine learning (ML) and AI. The way it works is something like this:

Broadly speaking, an artificial intelligence system mimics the way a human would perform a task. AI systems achieve this through different techniques. Machine learning, for example, is a particular methodology of AI that involves teaching an algorithm to perform tasks by showing it lots of examples rather than by providing a series of rigidly predefined steps.

Grammarly’s AI system combines machine learning with a variety of natural language processing approaches. Human language has many levels at which it can be analyzed and processed: from characters and individual words through grammatical structures and sentences, even paragraphs or full texts. Natural language processing is a branch of AI that involves teaching machines to understand and process human language (English, for instance) and perform useful tasks, such as machine translation, sentiment analysis, essay scoring, and, in our case, writing enhancement.

An important part of building an AI system is training it. AIs are kind of like children in that way. Kids learn how to behave by watching the people around them and by positive or negative reinforcement. As with kids, if you want your AI system to grow up to be helpful and functional, you need to be careful about what you expose it to and how you intervene when it gets things wrong.

The first step is choosing high-quality training data for your system to learn from. In Grammarly’s case, that data may take the form of a text corpus—a huge collection of sentences that human researchers have organized and labeled in a way that AI algorithms can understand. If you want your AI to learn the patterns of proper comma usage, for example, you need to show it sentences with incorrect commas, so it can learn what a comma mistake looks like. And you need to show it sentences with good comma usage, so it learns how to fix comma mistakes when it finds them.
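
That training setup is easy to picture in code. The sketch below is not Grammarly’s system, just a minimal example of the same principle: a classifier learns to separate sentences with good and bad comma placement from a handful of labeled examples, using character n-grams so that it “sees” where commas sit relative to the surrounding words.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A tiny labeled corpus: 1 = comma usage looks wrong, 0 = comma usage looks fine.
sentences = [
    "After dinner, we went for a walk.",
    "After dinner we went, for a walk.",
    "If it rains, the match will be cancelled.",
    "If it rains the match, will be cancelled.",
    "She bought apples, pears, and plums.",
    "She bought, apples pears and plums.",
]
labels = [0, 1, 0, 1, 0, 1]

# Character n-grams capture where commas sit relative to word boundaries.
model = make_pipeline(
    CountVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(sentences, labels)

# Predicts a 0/1 label for new sentences; 1 means the comma placement looks suspicious.
print(model.predict(["Before leaving she locked, the door."]))

A real system is, of course, trained on millions of labeled sentences with far richer features, which is exactly why the quality of that corpus matters so much.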

AI systems also need feedback from humans. When lots of users hit “ignore” on a particular suggestion, for example, Grammarly’s computational linguists and researchers make adjustments to the algorithms behind that suggestion to make it more accurate and helpful.

Just like people, AI does sometimes make errors. It’s especially possible when an AI is facing a situation it doesn’t have much experience with. Grammarly is trained on naturally written text, so it’s good at spotting issues that occur naturally when people write. It’s less good at handling sentences where mistakes have been deliberately inserted because they often don’t resemble naturally occurring mistakes.

Sources: Grammarly.com
