Tag Archives: Artificial Intelligence

AI technology used to fake employees

Reading Time: 2 minutes

A growing number of companies are using AI technology to put fake employees on their “about us” pages. According to a recent report, numerous firms are employing Generative Adversarial Network (GAN) software to create AI-generated images of fake employees.


I found out about it through a Business Insider report by Evan Ratliff, which showed many companies using this technique. My first reaction was pure shock. I kept wondering how many times I have been fooled by AI-generated images. What is more, I kept thinking about how many times I have used the services of companies that rely on these techniques. But my next questions were: why do CEOs do this, and what do they gain?

According to the Insider report, the point is to make the company look bigger than it is. The owners believe that the impression of a large workforce improves credibility. They are not wrong: plenty of research suggests that customers tend to see bigger companies as more professional than smaller ones, and are therefore more likely to use their services. Still, I don’t agree with such unethical methods. In my opinion, a small company with as few as three employees can often be better than one with a hundred.

Secondly, with specially designed AI employees, CEOs can make the company look more diverse than it is. In other words, they put people of color on their “about us” pages when in reality 99% of their employees are white. Is that ethical? In my opinion, using other cultures and nationalities to boost a company’s image is not. It is inappropriate to use images of people of color to appear more friendly and open-minded when, not long ago, these very people suffered from increased unemployment solely because of the color of their skin. If a company wants to be more diverse, the first thing to do is to start employing people of different cultures and nationalities.

So now we all know that some of the employees on “about us” pages can be fake. But how would you react if I told you there is a company that faked almost its entire workforce? The website of Informa Systems, which has ties to the City of Austin Police Department, was covered with fake images. The company faked the images not only of rank-and-file employees but also of senior ones – even the chief marketing officer. As the report found, only one of the Informa Systems employees was real.

I think using AI to generate images of fake employees is harmful to customers. It is customer manipulation, and it should be prohibited. Moreover, there should be heavy fines for such techniques, because they distort the market.

What do you think? Let me know in the comments.





Will AI replace programmers?

Reading Time: 2 minutes

“AI assistants are already here, so work that before required 10 devs will require 9, then 8, and so on…” – so goes one answer from a Redditor at r/artificial.

Indeed, there are AI models developed by OpenAI and DeepMind that are capable of coding in many languages. Many would argue that the job of a programmer is too complex and too versatile for machines to automate every task within software development. However, AI already takes part in software development in the form of code-writing assistants, automatic bug fixing, and project delivery estimation. Thanks to the gigantic amount of code available online via platforms like GitHub, researchers can develop deep learning models for writing code. Such AI solutions generate code based on a programming task description provided by the user:

  1. OpenAI Codex – A general-purpose programming model with natural language understanding, which makes it capable of writing code based on a problem description as well as explaining code in natural language. Proficient in more than a dozen programming languages, Codex can also interpret simple commands in natural language and execute them on the user’s behalf.
  2. DeepMind AlphaCode – A deep learning model trained on over 700 gigabytes of code from GitHub repositories and tuned by its creators on problem statements, test cases, and submissions – correct and incorrect – from coding contests. When AlphaCode is given a programming challenge, it generates thousands of possible solutions and filters them down to the best-performing ones. When tested on competitive programming contests, it achieved better results than 46% of participants.
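The generate-and-filter idea behind AlphaCode can be sketched in a few lines of Python. This is purely illustrative – the candidate programs below are hardcoded stand-ins for what a model would actually sample, and the task and tests are invented – but it shows the core mechanism: sample many programs, keep only those that pass the example test cases.

```python
# Illustrative sketch of AlphaCode-style generate-and-filter:
# run each sampled candidate against the problem's example tests
# and keep only the candidates that pass all of them.

def run_candidate(source, test_cases):
    """Return True if the candidate's solve() passes every (input, expected) pair."""
    namespace = {}
    try:
        exec(source, namespace)  # compile and load the candidate program
        solve = namespace["solve"]
        return all(solve(x) == expected for x, expected in test_cases)
    except Exception:
        return False  # crashing or malformed candidates are filtered out

# Stand-ins for thousands of model samples for an invented task:
# "return the sum of the digits of n".
candidates = [
    "def solve(n):\n    return n % 9",                        # wrong
    "def solve(n):\n    return sum(int(d) for d in str(n))",  # correct
    "def solve(n):\n    return len(str(n))",                  # wrong
]

tests = [(123, 6), (99, 18), (5, 5)]
survivors = [c for c in candidates if run_candidate(c, tests)]
print(len(survivors))  # → 1: only the correct candidate survives
```

The real system then clusters and ranks the surviving samples before submitting; this sketch keeps only the filtering step.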

Taking this into account, I believe that solutions like the two above are just the beginning of AI in programming and will evolve into systems capable of creating end-to-end code for users. This will drastically influence the labor market for programmers and their everyday tasks. Researchers and practitioners point out that the work of programmers consists not only of writing code but also of understanding stakeholders’ needs, adjusting code to them, working iteratively, and so on – that’s why AI will not completely replace programmers, but rather boost their effectiveness and let them focus on the most “human” aspects of programming.

In my opinion, the development of such solutions will slowly decrease the need for programmers in companies as most repetitive coding tasks are automated. However, creativity and innovation will be valued even more highly in programmers, and those able to demonstrate these two traits will be in even greater demand.


1. https://www.reddit.com/r/artificial/comments/ojjg5i/will_ai_replace_programmers_is_a_question_that_i/
2. https://www.techslang.com/will-programmers-become-obsolete-because-of-ai/
3. https://techmonitor.ai/technology/ai-and-automation/deepmind-alphacode-ai-software-developer
4. https://openai.com/blog/openai-codex/
5. https://medium.com/geekculture/will-ai-replace-programmers-fb6fcfd70b37


Collaborative community joins the race for the best AI

Reading Time: 2 minutes

Chatbots, online translators, text generators, and grammar checkers – these are a few of the many modern applications of artificial intelligence language models. The most popular and advanced models, like GPT-3 or BERT, are claimed to “understand” human language. Unfortunately, these solutions are not open to the public. Why? Mainly because only big corporations like Google, Microsoft, or Huawei can afford the enormous computing resources and research teams needed to develop them.

💡According to the official GPT-3 paper, it would take 355 years to train this AI model on a single machine equipped with one of the most advanced graphics cards.

Open-source solution

To tackle the problem of the centralization of hyper-advanced AI models, hundreds of researchers from all around the world have gathered to bring an open-source solution to the public. Independent research and industry volunteers formed a collaborative project called BigScience and are creating an enormous language AI whose training will take about four months – anyone can track the training progress on the project’s website. The model is expected to perform well on various language-related tasks in 46 languages 🤯.

Implications of open-source

Currently, the most advanced AI models owned by hi-tech companies can be accessed by individuals, but anyone who wants to use them either needs to meet specific conditions or pay for access. One of the reasons is that the organizations that own AI solutions like GPT-3 still don’t know all the potential misuses that could harm people and thus want to control the purposes their solutions are used for. However, the new AI model created by the BigScience collaborative community will most probably be open to anyone, which means its use cases will not be controlled.

Governments should join the collaboration

Hyper-advanced AI models can solve many problems and bring enormous value to society. In my opinion, collaborative projects like BigScience show that sooner or later, hyper-advanced AI solutions will be available to anyone. That’s why I believe governments should look into the problem of harmful applications of AI and establish laws to prevent such cases. Hopefully, that would encourage at least some hi-tech corporations to share their resources and help democratize advanced AI safely!








Grammarly – a helping hand at improving your English grammar

Reading Time: 3 minutes

by Lev Hladush

Grammarly is both the name of a San Francisco-based company and of its main product – a communication assistant that helps any internet user correct grammar and typos while writing.

What is especially exciting about Grammarly is that its assistant relies heavily on artificial intelligence, making it a particular object of interest for us, students of the Management and Artificial Intelligence program. Grammarly uses AI to help millions of people worldwide make their communication clear, effective, and error-free. Everyone knows that communication is key to both personal and professional success, and the company’s mission is to improve lives by improving communication. The big vision behind it is to help people articulate their thoughts in a way that’s clear and effective – in a way that makes them understood as intended.

Core to this mission has been work in natural language processing (NLP). The team relies on deep expertise in NLP, machine learning (ML), and AI. The way it works is something like this:

Broadly speaking, an artificial intelligence system mimics the way a human would perform a task. AI systems achieve this through different techniques. Machine learning, for example, is a particular methodology of AI that involves teaching an algorithm to perform tasks by showing it lots of examples rather than by providing a series of rigidly predefined steps.

Grammarly’s AI system combines machine learning with a variety of natural language processing approaches. Human language has many levels at which it can be analyzed and processed: from characters and individual words through grammatical structures and sentences, even paragraphs or full texts. Natural language processing is a branch of AI that involves teaching machines to understand and process human language (English, for instance) and perform useful tasks, such as machine translation, sentiment analysis, essay scoring, and, in our case, writing enhancement.

An important part of building an AI system is training it. AIs are kind of like children in that way. Kids learn how to behave by watching the people around them and by positive or negative reinforcement. As with kids, if you want your AI system to grow up to be helpful and functional, you need to be careful about what you expose it to and how you intervene when it gets things wrong.

The first step is choosing high-quality training data for your system to learn from. In Grammarly’s case, that data may take the form of a text corpus—a huge collection of sentences that human researchers have organized and labeled in a way that AI algorithms can understand. If you want your AI to learn the patterns of proper comma usage, for example, you need to show it sentences with incorrect commas, so it can learn what a comma mistake looks like. And you need to show it sentences with good comma usage, so it learns how to fix comma mistakes when it finds them.
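The idea of learning usage patterns from a labeled corpus can be illustrated with a deliberately tiny sketch. This is not Grammarly’s actual method – the corpus, threshold, and whitespace tokenization below are all invented for illustration – but it shows the core pattern: counting evidence from labeled examples instead of hand-writing rules.

```python
# Toy sketch of corpus-based learning: from a tiny hand-labeled corpus,
# count how often a comma follows each word, then flag likely missing
# commas in new text.
from collections import Counter

labeled_corpus = [
    "However , the results were good .",
    "However , we tried again .",
    "Yes , that works .",
    "The results were good .",
]

follows_comma = Counter()  # times each word was followed by a comma
total = Counter()          # times each word was seen (excluding commas)
for sentence in labeled_corpus:
    tokens = sentence.split()
    for i, tok in enumerate(tokens[:-1]):
        if tok == ",":
            continue
        total[tok] += 1
        if tokens[i + 1] == ",":
            follows_comma[tok] += 1

def suggest_commas(sentence, threshold=0.9):
    """Suggest a comma after words that are almost always followed by one."""
    suggestions = []
    tokens = sentence.split()
    for i, tok in enumerate(tokens[:-1]):
        seen = total[tok]
        if seen and follows_comma[tok] / seen >= threshold and tokens[i + 1] != ",":
            suggestions.append(tok)
    return suggestions

print(suggest_commas("However we should check the data ."))  # → ['However']
```

Real systems replace these raw counts with statistical and neural models over far richer features, but the training signal – labeled examples of correct and incorrect usage – is the same.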

AI systems also need feedback from humans. When lots of users hit “ignore” on a particular suggestion, for example, Grammarly’s computational linguists and researchers make adjustments to the algorithms behind that suggestion to make it more accurate and helpful.

Just like people, AI sometimes makes errors. This is especially likely when an AI faces a situation it doesn’t have much experience with. Grammarly is trained on naturally written text, so it’s good at spotting issues that occur naturally when people write. It’s less good at handling sentences where mistakes have been deliberately inserted, because they often don’t resemble naturally occurring mistakes.

Sources: Grammarly.com


Will artificial intelligence replace writers and musicians?

Reading Time: 2 minutes

According to Ericsson’s Connected Intelligent Machines report, 20% of consumers prefer AI-driven content to human-made content. The results show that a creative race between people and machines is underway. However, it is too early to announce a winner, as only one in five respondents prefers the machine-made content.

Consumers predict that mass media will be increasingly influenced by automation by 2030. In fact, the future may be closer than we think. Today, even the most basic AI language generators are proving to be good enough content creators on social media platforms.

The future of content creation may lie in human-machine collaboration. One interesting area where this is already happening at the mass market level is in science fiction literature. Famed Chinese SF author Chen Qiufan, competing with writers such as Nobel Prize winner Mo Yan, recently won a literary competition in Shanghai with his short story “The State of Trance” which featured AI-generated passages.

Will Artificial Intelligence reach the film or music area?

A report from Ericsson found that consumers consider film and music to be the domain of human creativity. Six in ten respondents say they would prefer human film producers to AI counterparts. However, most of this group of respondents are apparently unaware that AI is already being used in the film industry to support human decision-making.

Consumers are still more likely to choose humans as music makers – 65% of respondents say so. However, the Connected Intelligent Machines report also found that six in ten of us believe “artificial musicians” will be able to surpass humans on the charts by 2030.

https://bit.ly/3rYuELe – Photo 1
https://bit.ly/2PH1mDV – Ericsson’s report


AI learns to generate images from text and begins to better understand our world

Reading Time: 2 minutes 

OpenAI, co-founded by Elon Musk, has created the world’s most stunning AI model to date. GPT-3 (Generative Pre-trained Transformer 3) can, without any special prompts, compose poems, short stories, and songs, making one think they are the work of a real person. But eloquence is just a gimmick, not to be confused with a human understanding of the environment. What if the same technologies were trained simultaneously on text and images?

Researchers from the Paul Allen Institute for Artificial Intelligence have created a special, visual-linguistic model. It works with text and images and can generate pictures from text. The pictures look disturbing and strange, not at all like the hyperrealistic “deepfakes” created by generative adversarial networks (GANs). However, this capability has long been an important missing piece.

The aim of the study was to find out whether neural networks can understand the visual world the way humans do. For example, a child who has learned the word for an object can not only name it but also draw the object from a hint, even when the object itself is not in view. So the AI2 project team asked the model to do the same: generate images from captions.

The final images created by the model are not entirely realistic upon close inspection. But that is not the point: they contain the correct high-level visual concepts. The AI simply draws the way a person who cannot draw would sketch on paper.

This makes sense: converting text to an image is more difficult than doing the opposite.

“A caption doesn’t specify everything contained in an image,” says Ani Kembhavi, AI2’s computer vision team leader.

Creating an image from a caption is a transformation from less information to more – hard enough for the human mind, let alone for a program. If a model is asked to draw a “giraffe walking along a road,” it needs to infer that the road will be gray rather than bright pink and will pass next to a field rather than the sea. None of this is obvious to an AI.

Sample images generated by the AI2 model from captions. Source: AI2

This stage of the research shows that neural networks are capable of creating abstractions – a fundamental skill for understanding our world.

In the future, this technology could allow robots to see our world the way humans do, which would open up a huge range of possibilities. The better a robot understands its environment and uses language to communicate, the more complex the tasks it will be able to perform. In the current perspective, it helps programmers better understand aspects of machine learning.

“Image generation has really been a missing puzzle piece. By enabling this, we can make the model learn better representations to represent the world.”






Samsung’s NEON digital avatars touted as artificial humans

Reading Time: 4 minutes

The fact that Samsung would show up with a new project at CES 2020 had been making noise for a long time. Everyone was wondering what “artificial humans” could be. And one thing is certain: after all the media buzz around the project, everyone expected something completely different – especially after the prematurely disclosed material, which can be watched below.


What exactly is this project about?

NEON is the idea of Samsung researcher Pranav Mistry. The project emerged out of STAR Labs – Samsung Technology and Advanced Research Labs – and is funded by Samsung, but it’s not actually a Samsung company.

The NEON project consists of realistic human avatars that are computationally generated and can interact with people in real time. At this point, each NEON is created from footage of an actual person that is fed into a machine-learning model. A NEON is meant to mimic real human appearance and emotions, with its own personality and the ability to behave like a human. The avatars can also remember and learn.

According to Pranav Mistry, NEON isn’t meant to replace Samsung’s digital assistant Bixby. What is more, it won’t be implemented in Samsung products; NEON operates independently.


Examples of the NEON’s application

Each NEON avatar can be customized for different tasks and is able to respond to queries with a latency of a few milliseconds. They’re not intended to be just visual skins for AI assistants but to be put to more varied uses. If we are to believe STAR Labs CEO Pranav Mistry, in the near future everyone will be able to license or subscribe to a NEON. The roles can differ: a service representative, a financial advisor, a healthcare provider, or a concierge. The founder also assures that NEONs will work as TV anchors, spokespeople, or movie actors. They can simply be companions and friends, if people so wish.

The first wave of Neons are modeled after real people.
Source: https://www.neon.life/


NEONs will work as TV anchors, spokespeople, or movie actors.
Source: https://www.neon.life/


What technology is behind it?

There are two main technologies behind NEON. The first is Core R3, which stands for reality, real-time, and responsiveness. Core R3 is the graphics engine that powers the avatars’ natural movements, expressions, and speech. The second technology is Spectra, which is responsible for the artificial intelligence side: intelligence, learning, emotions, and memory. Spectra is not ready for launch yet, but the company says it will present the technology later this year. At the moment it is still being developed.

Neon’s Core R3 graphics engine demonstrated at CES 2020.
Source: https://www.cnet.com/news/samsung-neon-artificial-humans-are-confusing-everyone-we-set-record-straight/


What about the uncanny valley?

If NEON avatars are to become real companions in everyday life, one should ask whether their realism isn’t itself a problem. This concerns the phenomenon of the uncanny valley – the hypothesis that a robot that looks or acts almost, but not exactly, like a human being makes observers feel uneasy or even repulsed. While some people marvel at how STAR Labs has worked out every detail, others feel at the very least uncomfortable.


Why is everyone disappointed?

“NEON is like a new kind of life. There are millions of species on our planet and we hope to add one more” – this is what we heard from STAR Labs CEO Pranav Mistry before the CES 2020 presentation. It is no wonder that nobody was awestruck when it turned out that NEON is just a highly detailed digital avatar. In addition, the demo presented at the show was fully controlled by people from STAR Labs. All the media hype made everyone wait impatiently for the show, only to find out that NEON still has a lot of work ahead of it.

Still, don’t listen to the haters: the NEON avatars look really good, and the project’s potential is certainly there. The final version of the STAR Labs venture has not arrived yet, so we shouldn’t believe every media report. It will soon be clear whether the company can combine two ambitious technologies – the avatars and the AI – into one.


Do you see a practical application of Samsung’s NEON in the near future? Would you feel comfortable if your teacher wasn’t a real person but Samsung’s NEON?



[1] https://www.theverge.com/2020/1/7/21051390/samsung-artificial-human-neon-digital-avatar-project-star-labs

[2] https://www.theverge.com/2020/1/8/21056424/neon-ceo-artificial-humans-samsung-ai-ces-2020

[3] https://www.engadget.com/2020/01/05/samsung-neon-artificial-human-teaser/

[4] https://www.cnbc.com/2020/01/06/samsung-neon-artificial-human-announced-at-ces-2020.html

[5] https://www.cnet.com/news/samsung-neon-project-finally-unveiled-humanoid-ai-chatbot-artificial-humans/

[6] https://www.cnet.com/news/samsung-neon-heres-when-well-get-details-on-the-mysterious-ai/

[7] https://economictimes.indiatimes.com/magazines/panache/meet-neon-samsungs-new-ai-powered-robot-which-can-converse-sympathise/articleshow/73135240.cms

[8] https://www.livemint.com/companies/people/we-ll-live-in-a-world-where-machines-become-humane-pranav-mistry-11577124133419.html

[9] https://mashable.com/article/samsung-star-labs-neon-ces/?europe=true

[10] https://www.wired.co.uk/article/samsung-neon-digital-avatars


DeepL – a translator which surpassed Google Translate

Reading Time: 4 minutes

A company doesn’t have to be a technological giant to create a product that outdoes the most popular programs of its kind. There is no doubt that Google, Microsoft, and Facebook are the leaders in the world of automatic translation. And yet the small company DeepL has created a translator that sometimes exceeds the quality of the most popular programs of this type.

DeepL logo
Source: https://www.deepl.com/home


How was DeepL created?

It turns out that the key to developing the translation service was the company’s first product, Linguee, an online translation search engine. The data obtained through it became the training material for the artificial intelligence behind DeepL.

Interestingly, Linguee’s co-founder, Gereon Frahling, once worked for Google Research but left in 2007 to pursue his new venture.

Currently, DeepL supports 42 language combinations between Polish, English, German, French, Spanish, Italian, and Dutch. The AI is already learning more languages, such as Mandarin, Japanese, and Russian. There are plans to introduce an API, which would make it possible to build new products and embed the mechanism in other services.

The team had been working with machine learning for years, on tasks adjacent to machine translation, before finally beginning intensive work on a completely new system and a company called DeepL.


What is the advantage of DeepL?

Once again, people realized that AI is learning all the time – to the benefit of consumers, of course. The artificial intelligence behind DeepL not only accurately recognizes words and selects translations but is also able to understand linguistic nuances and cope with reshuffled sentence patterns, which makes the result of a user’s query feel remarkably natural – as if it were written by a human being.

The company also has its own supercomputer, located in Iceland, which operates at 5.1 petaflops. According to press releases, this hardware would place DeepL 23rd in the Top 500 list of supercomputers worldwide.


The statistics do not lie

A blind test compared the new product with solutions from Google, Facebook, and Microsoft. Professional translators were asked to choose the best translations without knowing which system produced them:

DeepL’s blind testing results
Source: https://techcrunch.com/2017/08/29/deepl-schools-other-online-translators-with-clever-machine-learning/


But that’s not all: DeepL also achieves great scores on BLEU, an algorithm for evaluating the quality of machine translation by comparing it against human reference translations.
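For the curious, the core of BLEU – modified n-gram precision combined with a brevity penalty – can be sketched in pure Python. This toy version makes simplifying assumptions (unigrams and bigrams only, a single reference translation, whitespace tokenization); real evaluations use established tooling rather than hand-rolled code.

```python
# Minimal BLEU sketch: modified n-gram precision with a brevity penalty.
import math
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=2):
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts, ref_counts = ngrams(cand, n), ngrams(ref, n)
        # "Modified" precision: clip each n-gram count by the reference count.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        precisions.append(overlap / max(1, sum(cand_counts.values())))
    if min(precisions) == 0:
        return 0.0
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Penalize candidates shorter than the reference.
    brevity = min(1.0, math.exp(1 - len(ref) / len(cand)))
    return brevity * geo_mean

ref = "the cat sat on the mat"
print(bleu("the cat sat on the mat", ref))  # → 1.0 for a perfect match
print(bleu("the cat on the mat", ref))      # lower: a word is missing
```

Higher scores mean the output shares more word sequences with the human reference, which is why BLEU is a standard way to compare systems like DeepL and Google Translate on the same test set.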


Why do others recommend DeepL instead of Google Translate?

The main advantage of DeepL over Google Translate is its much better handling (or rather detection) of idioms, set phrases, and phraseological compounds. Where Google Translate falters and falls back on literal meaning, DeepL can surprisingly offer a more nuanced and much more specific wording. The result is not a literal translation of the text but one that best harmonizes with the contexts and connotations of the words.

The passage from a German news article rendered by DeepL
Source: https://techcrunch.com/2017/08/29/deepl-schools-other-online-translators-with-clever-machine-learning/

The passage from a German news article rendered by Google Translate
Source: https://techcrunch.com/2017/08/29/deepl-schools-other-online-translators-with-clever-machine-learning/


No wonder that DeepL is gaining recognition all over the world. Here are some reviews:

“Thanks to more French-sounding phrases, DeepL has also surpassed other services.” – Le Monde, France

“In the first test, from English to Italian, it was very accurate. In particular, it understood the meaning of the sentence well instead of being tripped up by the literal translation.” – La Repubblica, Italy

“DeepL from Germany surpasses Google Translate. A short WIRED test shows that the results of DeepL are by no means worse than those of its best competitors, and in many cases even surpass them. Translated texts are often much more fluent; where Google Translate creates completely meaningless word strings, DeepL can at least guess the connection.” – WIRED.de, Germany

“We were impressed with how the artificial intelligence selects translations and how the results of its work look. Personally, I had the impression that a human was sitting on the other side, translating at top speed.” – Antyweb, Poland


The DeepL tool has been made available to a wider audience – for free in the form of a website.

Now it is only a matter of waiting for DeepL to advertise its tool, because although it does not have a large language base, at first glance the accuracy of the translations definitely exceeds the most popular tools of this type.

It’s worth watching how the product will develop further as the current achievements of DeepL are really promising.

Did any of you choose DeepL instead of Google Translate?



[1] https://techcrunch.com/2017/08/29/deepl-schools-other-online-translators-with-clever-machine-learning/

[2] https://www.deepl.com/blog/20180305.html

[3] https://www.dw.com/en/deepl-cologne-based-startup-outperforms-google-translate/a-46581948

[4] https://www.forbes.com/sites/samanthabaker1/2019/06/27/will-this-german-startup-win-the-translation-game/

[5] https://www.deutsche-startups.de/2018/07/05/deepl-koelner-uebersetzungskoenig-macht-millionengewinn/

[6] https://www.forbesdach.com/artikel/davids-erbe-und-igels-strategie.html

[7] https://www.letemps.ch/societe/deepl-meilleur-traducteur-automatique



Reading Time: 2 minutes

If you’re familiar with the sci-fi anthology series Black Mirror, you might recall the fourth-season episode “Metalhead”. Apparently, it’s no longer fiction – it’s today’s reality.

The robot dog named Spot is an invention of Boston Dynamics, a company that spun out of MIT. According to the Massachusetts branch of the American Civil Liberties Union (ACLU), these robots are now working with the Massachusetts State Police’s bomb squad.
The ACLU accessed a memorandum of agreement between the state and Boston Dynamics through a public records request.
The organization’s request letter reads as follows: “The ACLU is interested in this subject and seeks to learn more about how your agency uses or has contemplated using robotics.”
The ACLU collected all the available information about the new partnership, including the fact that Boston Dynamics leased the Spot robot dog to the police force for 90 days between August and November. Because no detailed information has been revealed to the public, we don’t know exactly how they are using these machines. The only information state police spokesman David Procopio provided about Spot is: “for the purpose of evaluating the robot’s capabilities in law enforcement applications, particularly remote inspection of potentially dangerous environments.”
Michael Perry, Boston Dynamics’ vice president of business development, stated that the company aims to make Spot useful in different areas, such as oil and gas, construction, or entertainment.
Perry said he anticipates that the police are using Spot by sending it into areas too dangerous for human beings.

The abovementioned robot dogs are built for general-purpose use. They have an open application programming interface, which means a warehouse operator – or, in this case, a police department – can customize them with their own software. From what we can read online, the State Police claim they haven’t used that feature yet.
Even though Perry claims the robot won’t be used in a way that would harm or intimidate people, the ACLU, as well as the internet community, is worried about the situation. Currently, the major issue is the lack of transparency in the overall robotics program.

There are various conspiracy theories circulating among netizens, mostly predicting worst-case scenarios.
The question is whether this invention is safe for the human race. But let’s face the truth: anything can be dangerous if used in the wrong way. If the people working on these machines program an algorithm that allows them to shoot at people, they will follow the order.
Personally, I’m amazed and don’t really know which adjective to use other than “amazing” in this case. I applaud Boston Dynamics for creating the algorithms of their breathtaking machines.


Neuralink – a way of merging Artificial Intelligence with human brain?

Reading Time: < 1 minute

In 2016, Elon Musk started a new company called Neuralink. Concerned about the threat of AI taking over the world, he gave it the mission of merging artificial intelligence with the human brain. What makes this possible is an almost non-invasive surgical operation in which a calibrated robot implants threads as tiny as 4 μm to 6 μm in width between the blood vessels of the brain. For comparison, a human hair is about 75 μm wide.



How does it work? In short, these threads act as electrodes that fire electrical signals to influence surrounding neurons in ways that could improve brain capabilities – better memory, mathematical reasoning, or coordination – or help with conditions like depression and Alzheimer’s. Experiments on humans are anticipated to begin in 2020. Is this the next step toward a dystopian future, or the beginning of a great technological advancement for mankind?