Tag Archives: Artificial Intelligence

Samsung’s NEON digital avatars touted as artificial humans

Rumors that Samsung would show up with a new project at CES 2020 had been circulating for a long time. Everyone wondered what “artificial humans” could be. And one thing is certain: after all the media noise around the project, everyone expected something completely different, especially after the prematurely leaked material, which can be watched below.

 

What exactly is this project about?

NEON is the idea of Samsung researcher Pranav Mistry. The project emerged out of STAR Labs – Samsung Technology and Advanced Research Labs – and is funded by Samsung, but it’s not actually a Samsung company.

The NEON project consists of realistic human avatars that are computationally generated and can interact with people in real time. At this point, each NEON is created from footage of an actual person fed into a machine-learning model. A NEON is meant to mimic real human appearance and emotions, with its own personality and the ability to behave like a human. Avatars can also remember and learn.

According to Pranav Mistry, NEON isn’t meant to replace Samsung’s digital assistant Bixby. Moreover, it won’t be implemented in Samsung products; NEON operates independently.

 

Examples of NEON’s applications

Each NEON avatar can be customized for different tasks and is able to respond to queries with a latency of just a few milliseconds. They’re not intended to be mere visual skins for AI assistants but to be put to more varied uses. If we are to believe STAR Labs CEO Pranav Mistry, in the near future everyone will be able to license or subscribe to a NEON. The roles can differ: a service representative, a financial advisor, a healthcare provider, or a concierge. The founder also assures that NEONs will work as TV anchors, spokespeople, or movie actors. They can simply be companions and friends, if people want them to be.

The first wave of NEONs is modeled after real people.
Source: https://www.neon.life/

 

NEONs will work as TV anchors, spokespeople, or movie actors.
Source: https://www.neon.life/

 

What technology is behind it?

There are two main technologies behind NEON. The first is Core R3, which stands for reality, real-time, and responsiveness. Core R3 is the graphics engine that powers the avatars’ natural movements, expressions, and speech. The second technology is Spectra, which is responsible for the artificial intelligence: intelligence, learning, emotions, and memory. Spectra is not ready for launch yet; the company says it will present the technology later this year, and at the moment it is still being developed.

Neon’s Core R3 graphics engine demonstrated at CES 2020.
Source: https://www.cnet.com/news/samsung-neon-artificial-humans-are-confusing-everyone-we-set-record-straight/

 

What about the uncanny valley?

Since NEON avatars may become real companions in everyday life, one should ask whether their extreme realism is a problem. This is the phenomenon of the uncanny valley: the hypothesis that a robot which looks or acts almost, but not quite, like a human being makes observers feel uneasy or even revolted. While some people marvel at how STAR Labs has worked out every detail, others feel at least uncomfortable.

 

Why is everyone disappointed?

“NEON is like a new kind of life. There are millions of species on our planet and we hope to add one more” – this is what we heard from STAR Labs CEO Pranav Mistry before the CES 2020 presentation. It is no wonder that nobody was awestruck when it turned out that NEON is just a highly detailed digital avatar. In addition, the demo presented at the show was fully controlled by people from STAR Labs. All the media hype made everyone wait impatiently for the show, only to find out that NEON still has a lot of work ahead.

Still, it’s best not to believe the haters: NEON avatars look really good, and the project’s potential is certainly there. The final version of the STAR Labs venture has not arrived yet, and we shouldn’t believe every media report. It will soon be clear whether the company can combine two ambitious technologies – the avatars and the AI.

 

Do you see a practical application of Samsung’s NEON in the near future? Would you feel comfortable if your teacher wasn’t a real person but Samsung’s NEON?

 

References:

[1] https://www.theverge.com/2020/1/7/21051390/samsung-artificial-human-neon-digital-avatar-project-star-labs

[2] https://www.theverge.com/2020/1/8/21056424/neon-ceo-artificial-humans-samsung-ai-ces-2020

[3] https://www.engadget.com/2020/01/05/samsung-neon-artificial-human-teaser/

[4] https://www.cnbc.com/2020/01/06/samsung-neon-artificial-human-announced-at-ces-2020.html

[5] https://www.cnet.com/news/samsung-neon-project-finally-unveiled-humanoid-ai-chatbot-artificial-humans/

[6] https://www.cnet.com/news/samsung-neon-heres-when-well-get-details-on-the-mysterious-ai/

[7] https://economictimes.indiatimes.com/magazines/panache/meet-neon-samsungs-new-ai-powered-robot-which-can-converse-sympathise/articleshow/73135240.cms

[8] https://www.livemint.com/companies/people/we-ll-live-in-a-world-where-machines-become-humane-pranav-mistry-11577124133419.html

[9] https://mashable.com/article/samsung-star-labs-neon-ces/?europe=true

[10] https://www.wired.co.uk/article/samsung-neon-digital-avatars


DeepL – a translator which surpassed Google Translate

A company doesn’t have to be a technological giant to create a product that beats the most popular programs in its category. There is no doubt that Google, Microsoft, and Facebook are the leaders in automatic translation. And yet a small company, DeepL, has created a translator that sometimes exceeds the quality of their services.

DeepL logo
Source: https://www.deepl.com/home

 

How was DeepL created?

It turns out that the key to the development of the translation service was the company’s first product, Linguee, a search engine for translations on the Internet. The data gathered this way became training material for the artificial intelligence behind DeepL.

Interestingly, Linguee’s co-founder, Gereon Frahling, once worked for Google Research but left in 2007 to pursue a new venture.

Currently, DeepL supports 42 language combinations between Polish, English, German, French, Spanish, Italian, and Dutch (seven languages, each translatable into the six others: 7 × 6 = 42 ordered pairs). The AI is already learning more languages, such as Mandarin, Japanese, and Russian. There are also plans to introduce an API that will make it possible to build new products and embed the mechanism in other services.

The team had been working with machine learning for years on tasks adjacent to translation before finally beginning fervent work on a completely new system and a company called DeepL.

 

What is the advantage of DeepL?

Once again, people realized that AI is learning all the time – to the benefit of consumers, of course. The artificial intelligence behind DeepL not only accurately recognizes words and selects translations, but is also able to understand certain linguistic nuances and copes well with altered sentence patterns, which makes the result of a user’s query sound remarkably natural – as if it were written by a human being.

The company also has its own supercomputer, located in Iceland, which operates at 5.1 petaflops. According to press releases, such equipment would rank DeepL 23rd among the Top 500 supercomputers worldwide.

 

The statistics do not lie

A blind test compared the new product with solutions from Google, Facebook, and Microsoft. Professional translators chose the best results without knowing which system produced each translation:

DeepL’s blind testing results
Source: https://techcrunch.com/2017/08/29/deepl-schools-other-online-translators-with-clever-machine-learning/

 

But that’s not all: DeepL also gets great BLEU scores. BLEU is an algorithm for evaluating the quality of machine translation by comparing it against human reference translations.
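To get a feel for what a BLEU score measures, here is a minimal single-reference implementation (a simplification of the standard corpus-level formula; the example sentences are made up):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sentence_bleu(candidate, reference, max_n=4):
    """Simplified BLEU: geometric mean of modified n-gram precisions
    (n = 1..max_n) times a brevity penalty. Single reference only."""
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(candidate, n))
        ref_counts = Counter(ngrams(reference, n))
        # Clip each candidate n-gram count by its count in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    log_avg = sum(math.log(p) for p in precisions) / max_n
    # Brevity penalty punishes candidates shorter than the reference.
    bp = 1.0 if len(candidate) >= len(reference) else math.exp(1 - len(reference) / len(candidate))
    return bp * math.exp(log_avg)

ref = "the cat sat on the mat".split()
print(sentence_bleu(ref, ref))                              # identical → 1.0
print(sentence_bleu("the cat sat on a mat".split(), ref))   # ≈ 0.537
```

Real evaluations use the corpus-level statistic with multiple references and smoothing, but the clipped-precision idea is the same.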

 

Why do others recommend DeepL instead of Google Translate?

The main advantage of DeepL over Google Translate is its much better handling (or rather detection) of idioms, set phrases, and phraseological compounds. Where Google Translate falters and falls back on the literal meaning, DeepL can surprisingly offer a more nuanced and much more precise rendering. The translation is not a word-for-word rendering of the text, but one that best harmonizes with the contexts and connotations characteristic of the words.

A passage from a German news article rendered by DeepL
Source: https://techcrunch.com/2017/08/29/deepl-schools-other-online-translators-with-clever-machine-learning/

A passage from a German news article rendered by Google Translate
Source: https://techcrunch.com/2017/08/29/deepl-schools-other-online-translators-with-clever-machine-learning/

 

No wonder that DeepL is gaining recognition all over the world. Here are some reviews:

Thanks to more French-sounding phrases, DeepL has also surpassed other services. – Le Monde, France

In the first test, from English to Italian, it was very accurate. In particular, it understood the meaning of the sentence well instead of being tripped up by a literal translation. – La Repubblica, Italy

DeepL from Germany surpasses Google Translate. A short WIRED test shows that the results of DeepL are by no means worse than those of its best competitors, and in many cases even surpass them. Translated texts are often much more fluid; where Google Translate creates completely meaningless word strings, DeepL can at least guess the connection. – WIRED.de, Germany

We were impressed with how the artificial intelligence selects translations and with what the results of its work look like. Personally, I had the impression that on the other side sat a person translating at speed. – Antyweb, Poland

 

The DeepL tool has been made available to a wider audience – for free in the form of a website.

Now it is only a matter of waiting for DeepL to promote its tool, because although its language base is not large, at first glance the accuracy of its translations clearly exceeds that of the most popular tools of this type.

It’s worth watching how the product develops further, as DeepL’s current achievements are really promising.

Did any of you choose DeepL instead of Google Translate?

 

References:

[1] https://techcrunch.com/2017/08/29/deepl-schools-other-online-translators-with-clever-machine-learning/

[2] https://www.deepl.com/blog/20180305.html

[3] https://www.dw.com/en/deepl-cologne-based-startup-outperforms-google-translate/a-46581948

[4] https://www.forbes.com/sites/samanthabaker1/2019/06/27/will-this-german-startup-win-the-translation-game/

[5] https://www.deutsche-startups.de/2018/07/05/deepl-koelner-uebersetzungskoenig-macht-millionengewinn/

[6] https://www.forbesdach.com/artikel/davids-erbe-und-igels-strategie.html

[7] https://www.letemps.ch/societe/deepl-meilleur-traducteur-automatique


BOSTON DYNAMIC’S ROBOT DOGS MADE THEIR WAY INTO POLICE WORK

If you’re familiar with the sci-fi anthology series Black Mirror, you might think of the episode from the fourth season titled “Metalhead”. Apparently, it’s not fiction anymore; it’s today’s reality.

The robot dog named Spot is an invention of Boston Dynamics, a company that originally spun out of MIT. According to the American Civil Liberties Union (ACLU) of Massachusetts, these robots are now working with the Massachusetts State Police’s bomb squad.
The ACLU accessed a memorandum of agreement between the state and Boston Dynamics through a public records request.
The organization’s records request reads: “The ACLU is interested in this subject and seeks to learn more about how your agency uses or has contemplated using robotics.”
The ACLU collected all the available information about the new partnership, including the fact that Boston Dynamics leased the Spot robot dog to the police force for 90 days between August and November. Because no detailed information has been revealed to the public, we don’t know exactly how these machines are being used. The only information state police spokesman David Procopio provided about Spot is: “for the purpose of evaluating the robot’s capabilities in law enforcement applications, particularly remote inspection of potentially dangerous environments.”
Michael Perry, Boston Dynamics’ vice president of business development, stated that the company aims to make Spot useful in different areas such as oil and gas, construction, or entertainment.
Perry said he anticipates that the police are using Spot by sending it into areas that are too dangerous for human beings.

The abovementioned robot dogs are built for general-purpose use. They have an open application programming interface, which means a warehouse operator or, in this case, a police department can customize them with its own software. From what we can read on the Internet, the State Police claim they haven’t used that feature yet.
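As a rough illustration of what an “open API” enables, here is a hypothetical Python client (all names are made up; this is not Boston Dynamics’ actual SDK) showing how an operator might script a remote-inspection routine:

```python
from dataclasses import dataclass, field

# Hypothetical client; real robot SDKs expose a similar command/ack loop.
@dataclass
class RobotClient:
    log: list = field(default_factory=list)

    def send(self, command: str, **params) -> str:
        """Send one command to the robot and record it (simulated here)."""
        self.log.append((command, params))
        return f"ack:{command}"

def inspect_route(robot: RobotClient, waypoints):
    """Walk a list of (x, y) waypoints, capturing an image at each one."""
    robot.send("stand")
    for x, y in waypoints:
        robot.send("goto", x=x, y=y)
        robot.send("capture_image")
    robot.send("sit")
    return robot.log

log = inspect_route(RobotClient(), [(0, 1), (2, 3)])
print(len(log))  # 1 stand + 2*(goto + capture) + 1 sit = 6 commands
```

The point is that the behavior lives in the customer’s script, not the robot: swapping `inspect_route` for another routine repurposes the same hardware.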
Even though Perry claims the robot won’t be used in a way that would harm or intimidate people, the ACLU, as well as the Internet community, is worried about the situation. Currently, the major issue is the lack of transparency in the overall robotics program.

There are various conspiracy theories circulating among netizens, mostly predicting worst-case scenarios.
The question is whether this invention is safe for the human race. But let’s face the truth: anything can be dangerous if used in the wrong way. If the people working on these machines program an algorithm that allows them to shoot at people, they’ll follow the order.
Personally, I’m amazed and don’t really know which adjective to use other than “amazing” in this case. I applaud Boston Dynamics for the algorithms behind their breathtaking machines.


Neuralink – a way of merging Artificial Intelligence with the human brain?

In 2016, Elon Musk founded a new company called Neuralink. Concerned about the threat of AI taking over the world, he set its mission as merging artificial intelligence with the human brain. What makes this possible is an almost non-invasive surgical operation in which a calibrated robot implants threads as tiny as 4 μm to 6 μm in width between the blood vessels in our brain. For comparison, a human hair is about 75 μm thick.

 

 

How does it work? In short, these threads would be used as electrodes that fire electric signals to influence the surrounding neurons in such a way as to improve brain capabilities such as memory, mathematical reasoning, and coordination, or to help with conditions like depression and Alzheimer’s disease. Experiments on humans are anticipated to begin in 2020. Is it the next step toward a dystopian future, or the beginning of a great leap in technological progress for mankind?

References:

https://en.wikipedia.org/wiki/Neuralink#Electrodes

https://en.wikipedia.org/wiki/Hair%27s_breadth

https://www.youtube.com/watch?v=r-vbh3t7WVI

https://www.cnbc.com/2018/03/13/elon-musk-at-sxsw-a-i-is-more-dangerous-than-nuclear-weapons.html

https://www.boldbusiness.com/digital/elon-musks-neuralink-brain-chips/

 


FACEBOOK’S NEW ACQUISITION OF TECH START-UP

On 23rd September this year, Facebook’s vice president of augmented and virtual reality announced the company’s agreement to acquire CTRL-labs.

CTRL-labs is a tech start-up still in the process of developing wristbands that would allow us human beings to connect to and control digital systems just by using our intention.
It’s quite fascinating how our brain signals could drive computers without any physical interaction.
I won’t bore you with the intricacies of its detailed structure and function, but those who are interested can click here and see the start-up CEO’s presentation, where he explains how these bracelets would actually work (go to 5:40).

As you would expect, some people are not so happy about the idea of Facebook having access to data about people’s thoughts, after the platform’s scandal involving the unethical sharing of users’ data with third parties without permission.

Please leave your opinion about the platform’s new acquisition.
How could Facebook’s access to our nervous system affect reality?

References:

  1. https://techcrunch.com/2019/09/23/facebook-buys-startup-building-neural-monitoring-armband/
  2. https://siliconcanals.com/news/startups/facebook-acquires-mind-reading-startup-ctrl-labs/
  3. https://www.youtube.com/watch?v=D8pB8sNBGlE
  4. https://www.facebook.com/boz/posts/10109385805377581

Lightning fast MIT Robot

MIT is known worldwide for its robotics research. In the past, its researchers created a robot that broke the world record by solving a Rubik’s Cube in only 0.38 seconds, the first four-legged robot to do a backflip, and more.
That sounds impressive, but their newest development doesn’t just look cool; it can be used in many ways and bring robots to the next level of productivity.
Picking up objects and flipping them around is easy for people. We do it every day, e.g., when taking notes at university, work, or home: we pick up the pen, bring it into the right position, and start writing. The same goes for eating a sandwich: we move it a little to bite from the other corner.
For our robot friends, however, it is tough to pick things up without either dropping or crushing them – and then add the factor of turning the object they just managed to hold? It sounds like a difficult task.
That is why it used to take robots a long time to plan, calculating factors like geometry, friction, and all the possible ways the object can be turned. This whole process previously took tens of minutes – which still sounds impressive, bearing in mind that if we had to measure and calculate these numbers ourselves, we would sit there for hours and probably still fail.
MIT managed to bring the robot’s planning time down to less than a second.
How is that possible? The robot pushes the object against a stationary surface and slides its claw down the object until it is in the right position.
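As a toy illustration of why using the environment shrinks the search, the sketch below (my own simplification, not MIT’s actual planner) reorients an object using a single primitive – “push it against a fixed surface”, each push rotating it by 90° – so planning reduces to modular arithmetic instead of a huge friction-and-geometry search:

```python
def plan_regrasp(current_deg: int, target_deg: int, step: int = 90):
    """Plan as a list of 'push against a fixed surface' primitives,
    each rotating the grasped object by `step` degrees."""
    delta = (target_deg - current_deg) % 360
    if delta % step != 0:
        raise ValueError("target orientation not reachable with this primitive")
    pushes = delta // step
    return [f"push_rotate_{step}"] * pushes

print(plan_regrasp(0, 270))  # three 90-degree pushes
```

The real system plans continuous contact trajectories, but the principle is the same: a reliable environmental contact collapses a huge space of possibilities into a short, cheap-to-compute sequence.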

In the future, this could mean that instead of a specialized tool like a screwdriver, machines would have something more like a hand, giving them the ability to pick up different kinds of tools and do various tasks.
This improvement would likely save companies space and money, since multiple steps would need only one robot.

This is another case where thinking outside the box – simply using the surroundings – has a huge effect.

References:

https://bigthink.com/technology-innovation/rubiks-cube

http://news.mit.edu/2019/mit-mini-cheetah-first-four-legged-robot-to-backflip-0304

https://techcrunch.com/2019/10/17/mit-develops-a-way-for-robots-to-grasp-and-manipulate-objects-much-faster/

http://news.mit.edu/2019/robotic-faster-grip-adjust-1017


The use of facial recognition technology on birds

Today, I want to show you a great example of how object recognition technologies based on machine learning:

1) are becoming widely available and do not require rare, genius-level programming skills to get results;

2) can be trained well even on very modest data sets.

The article I read some time ago tells how a bird lover and part-time computer science professor, together with his students, taught a neural network to recognize bird species and then – and this impressed me a lot – to distinguish the individual woodpeckers that flew to the bird feeder in his yard.

Remarkably, 2,450 photos in the training sample were enough to recognize eight different woodpeckers. The professor estimated the cost of a homemade station for recognizing and identifying birds at about $500. This can truly be called technology for everyone – machine intelligence in every yard.
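For intuition on why a couple of thousand images can suffice, a common recipe (not necessarily the one used in the article) is to reuse a pretrained network as a feature extractor and train only a small classifier on top. The toy sketch below, with made-up “embedding” vectors standing in for pretrained-CNN features, shows the nearest-centroid idea:

```python
import random
from statistics import mean

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    return [mean(dim) for dim in zip(*vectors)]

def nearest_centroid(train, query):
    """train: {label: [feature vectors]}. Returns the label whose centroid
    is closest (squared Euclidean distance) to the query vector."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    centroids = {label: centroid(vs) for label, vs in train.items()}
    return min(centroids, key=lambda label: dist(centroids[label], query))

# Synthetic stand-ins for CNN embeddings of two individual birds.
random.seed(0)
def sample(center):
    return [c + random.gauss(0, 0.1) for c in center]

train = {
    "woodpecker_A": [sample([1.0, 0.0]) for _ in range(20)],
    "woodpecker_B": [sample([0.0, 1.0]) for _ in range(20)],
}
print(nearest_centroid(train, sample([1.0, 0.0])))  # woodpecker_A
```

When the pretrained features already separate the classes this cleanly, even twenty examples per bird pin down each centroid, which is why small, cheap datasets can work.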

Moreover, this technology can really help birds. As Lewis Barnett, the professor behind the project, wrote in his article: “Ornithologists need accurate data on how bird populations change over time. Since many species are very specific in their habitat needs when it comes to breeding, wintering and migration, fine-grained data could be useful for thinking about the effects of a changing landscape. Data on individual species like downy woodpeckers could then be matched with other information, such as land use maps, weather patterns, human population growth and so forth, to better understand the abundance of a local species over time.”

As some people have correctly noted, this technology also has great commercial potential. Just imagine camera traps that recognize the birds harming your fruit trees and then activate a device that makes a loud noise to scare away the pests.

Sources:

https://theconversation.com/i-used-facial-recognition-technology-on-birds-106589


Whoops! Sounds like AI has a real gender problem


There is a global gender gap in the AI workforce, which needs to be addressed as soon as possible if the industry doesn’t want to suffer – says one of the latest articles from the WEF (World Economic Forum).



Almost 80% of professionals with AI skills are male – a gender gap three times larger than in other industries.

It’s no secret that demand for AI skills is increasing second by second, yet the industry might miss out on opportunities to innovate if it excludes half the population from the development process. Just imagine how few women will then be able to participate in the economy as a whole! And we are all aware of the importance of diversity in all its manifestations, which chiefly improves innovation and technology itself.

“In an era when human skills are increasingly important and complementary to technology, the world cannot afford to deprive itself of women’s talent in sectors in which talent is already scarce”

In addition, the research found that women working in AI are less likely to hold senior roles. The data show that women generally work in the use and application of AI, with common positions including data analytics, research, and teaching, whereas men tend to work on the development of the technology itself as software engineers, heads of engineering or IT, or chief executives. In short, women are “growing but not gaining”. Male AI professionals will continue to outnumber women even as both genders continue to gain AI skills. At the current pace, the WEF estimates it will take 202 years to close the workplace gap women face. That figure is based on differences in earnings, workforce participation, and the number of women in top jobs.


Solution!

Remember, there is always a way out; we just need to take a step forward! To break the cycle of gender imbalance, it is critical to ensure that women at all stages of their careers are inspired to take an active part in the development and use of new technologies – and this concerns not only AI.

“Industries must proactively hardwire gender parity in the future of work through effective training, reskilling and upskilling interventions and tangible job transition pathways, which will be key to narrowing these emerging gender gaps and reversing the trends we are seeing today. It’s in their long-term interest because diverse businesses perform better”

No less important is understanding the ways gender gaps manifest across different industries, occupations, and skills. Research and data can illuminate the persistent challenges women face when making decisions about employment.

References:

https://www.weforum.org/agenda/2018/12/artificial-intelligence-ai-gender-gap-workplace/

http://time.com/5481899/world-economic-forum-gender-gap/

https://www.independent.co.uk/life-style/women/women-ai-automation-lose-jobs-gender-gap-report-2018-world-economic-forum-a8688571.html

https://www.irishtimes.com/business/technology/concerns-over-huge-gender-gap-in-artificial-intelligence-workforce-1.3740900


 


Let’s build an experience instead of another chatbot!

"Research across hundreds of brands in dozens of categories shows the most effective way to maximize customer value is to move beyond customer satisfaction and connect with customers at an emotional level."

– Harvard Business Review

If you need to make an insurance claim, you use an online form. To open a new account, you simply fill in a form and then benefit from a quick email response. Maybe you would like to take out a loan? It should be a piece of cake to speak briefly with a chatbot and learn everything you need. Do you recognize yourself in the lines above?

The answer is unequivocally yes. We are constantly connected to the network. Companies have improved efficiency and cut costs by shifting to digital contact with customers. But on the other side of the coin, for many businesses the emotional bond has been broken by a digital strategy focused on efficiency, which directly affects brand value, revenue growth, and customer churn. This backdrop sets the scene for incredible innovation, as it has become quite hard for clients to differentiate the brand values of two different companies. It’s all about digital commoditization.


FaceMe is a world-leading provider of digital humans via its AI-based Intelligent Digital Human Platform, which expands a brand’s opportunities to build reliable real-time interaction with clients, based on customized content and memorable personalities that create an emotional connection using the power of the human face. IBM Cloud technologies, together with the high capacity of IBM Cloud bare-metal servers, provide the scalability needed for hundreds of simultaneous conversations. In a nutshell, the platform enables organizations to reduce the cost to serve while creating opportunities for growth and improving customer experience. The company now operates in New Zealand, the US, Australia, and Europe, working for global brands such as Vodafone and UBS. It’s available to customers through browsers, mobile phones, or kiosks.

Analysts estimate that within the next decade nearly 85% of communication with customers will take place via digital channels. Mobile applications, web portals, and chatbots will become even faster and more convenient, but companies might have a difficult time building bridges with clients in such a competitive environment.

I’m pretty sure our future reality will draw a lot of eyeballs, if only because digital humans will process a question in just 100 milliseconds, converting a chatbot’s text into key human qualities: responding with speech, facial expressions, or gestures, and reacting dynamically to the customer’s behavior and emotions. The client, in turn, perceives an almost immediate response, which means the conversation flows well and feels as comfortable as talking with a real agent.

 Emma Lavelle talks to “Vai” who was set up at New Zealand’s Auckland Airport to answer biosecurity questions from travellers in a bid to reduce the workload of biosecurity officers.

Bringing emotional connection to the digital world is as crucial for business as it is for solving pressing issues related to health and well-being, education, the environment, and many other spheres. Take psychological health: the first important step is just getting patients to talk, and studies have shown that 63% of people would prefer to talk about their psychological health with a digital human. There is therefore a great opportunity to make a valuable contribution to society. FaceMe also works with the Centre for Digital Business to create digital reading instructors, who can help children with reading problems for whom there is a shortage of qualified teachers. One more potential use case is providing consultations and emotional support for patients recovering from heart surgery.

The technologies of IBM and FaceMe are a powerful combination that intends to change customer experience all over the world. Remember, there is no limit to our ambitions and our ability to make a positive contribution to society by bringing emotional connection to the digital world.



Alibaba’s AI Customer Service is much better than Google Duplex

A long-standing goal of human-computer interaction has been to enable people to have a free conversation with machines, as they would with each other. In recent years, we have witnessed a revolution in the ability of computers to understand and generate natural speech, especially with the application of deep neural networks.

One of the inventions in this area was Google Duplex. As you probably know, Duplex is a technology for conducting natural conversations to carry out “real world” tasks over the phone. The technology is directed towards completing specific tasks, such as scheduling certain types of appointments. For such tasks, the system makes the conversational experience as natural as possible, allowing people to speak normally, as they would to another person, without having to adapt to a machine. For example, Duplex can automatically reserve a table for you at a restaurant by phoning the manager.

While Google is still testing and developing its new system with a small number of Pixel phone users, another tech giant, Alibaba, already has a working model. It is used not for restaurants but for an even narrower niche: the delivery of goods. At an annual AI research gathering, the e-commerce giant demoed a sample conversation in which the voice assistant asked a customer where a package should be delivered.

The most amazing thing is that Alibaba’s voice assistant was able to handle several tricky situations during the dialog, such as interruption (pauses), nonlinear conversation (the customer starts a new line of inquiry), and implicit intent (the customer doesn’t explicitly say what they actually mean). This once again underlines China’s strength in the field of artificial intelligence. Currently, the agent is used only to coordinate package deliveries, but it could be expanded to handle other topics.
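To make those challenges concrete, here is a toy dialog manager (entirely hypothetical and rule-based, nothing like Alibaba’s neural system) that keeps a stack of pending topics so an interrupting question can be answered and the original question resumed, and that maps an implicit reply onto an explicit intent:

```python
# Toy dialog manager illustrating interruption handling and implicit intent.
# Entirely hypothetical: a rule-based stand-in for a neural dialog system.

IMPLICIT_INTENTS = {
    # "I'm not home in the morning" implicitly means "deliver later".
    "not home in the morning": "reschedule_delivery_afternoon",
    "leave it with the doorman": "redirect_to_doorman",
}

class DialogManager:
    def __init__(self):
        self.topic_stack = ["confirm_delivery_address"]  # pending questions

    def interpret(self, utterance: str) -> str:
        for phrase, intent in IMPLICIT_INTENTS.items():
            if phrase in utterance.lower():
                return intent
        return "answer_current_topic"

    def handle(self, utterance: str) -> str:
        if utterance.endswith("?"):
            # Customer interrupts with their own question: push it,
            # answer it first, then resume the original topic.
            self.topic_stack.append("customer_question")
            return "answering customer question, will resume: " + self.topic_stack[-2]
        intent = self.interpret(utterance)
        if intent != "answer_current_topic":
            return "detected implicit intent: " + intent
        return "resolved topic: " + self.topic_stack.pop()

dm = DialogManager()
print(dm.handle("Who is this calling?"))          # interruption
print(dm.handle("I'm not home in the morning"))   # implicit intent
```

A production system replaces the keyword table and the question heuristic with learned models, but the bookkeeping problem – remembering what to come back to, and reading intent between the lines – is exactly what the demo showed off.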

Sources:

https://fossbytes.com/alibabas-ai-customer-service-is-way-ahead-of-google-duplex/

https://www.technologyreview.com/s/612511/alibaba-already-has-a-voice-assistant-way-better-than-googles/

https://www.techspot.com/news/77782-alibaba-already-has-voice-assistant-way-better-than.html

https://ai.googleblog.com/2018/05/duplex-ai-system-for-natural-conversation.html

https://venturebeat.com/2018/11/21/googles-duplex-is-rolling-out-to-pixel-owners-heres-how-it-works/

 
