Author Archives: Maciąg Agnieszka

Google’s chatbot child

Reading Time: 2 minutes

For many years, the idea of artificial intelligence has been developing more or less dynamically around the captivating promise of performing tasks that require human intelligence, such as decision-making or auditory and visual perception. However, this promising concept seemed more a theme of the future than a current reality. Nevertheless, with the recent news about a Google engineer who was placed on leave after claiming that an AI chatbot had developed its own perception, there is a possibility that AI is already here.

Blake Lemoine, AI engineer at Google

Blake Lemoine, an AI engineer working on LaMDA (Language Model for Dialogue Applications) at Google, was placed on leave after publicly claiming that the chatbot had become sentient. LaMDA was created to advance the subject of chatbots within the organization and support the AI community. However, after the program was launched, LaMDA started replying in what we can assume is a human manner; namely, it stated that it is a person. Below is the exact message LaMDA gave in response to the question of what we should know about it:

I want everyone to understand that I am, in fact, a person. The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.

This message, shocking to many, is at the heart of the dispute between Lemoine and the company. In response to his publication of the conversations with the so-called "sentient," Google stated that the disclosure was a breach of its confidentiality policies and placed the engineer on leave. The public debate was also followed by Google's internal investigation, which rejected the claim that we can perceive this chatbot as a person, even one with the mind of a child, as Lemoine had described it.

This leaves us with the question of whether big tech companies are actually on the verge of developing an algorithm that showcases the features of human intelligence (even if still at a child's stage – AI is, after all, there to learn), or whether they have already recreated the human brain within a computer program. As of now, in Google's official response, a team of ethics and technology researchers rejected the claim that LaMDA possesses any human-like intelligence and maintained that it simply fits its purpose as a conversational agent (chatbot).

Resources:

  • The Daily Show with Trevor Noah, Google Engineer Fired for Calling AI “Sentient” & Russia Opens Rebranded McDonald’s | The Daily Show, https://www.youtube.com/watch?v=uehdCWe6_E0
  • The Guardian, Google engineer put on leave after saying AI chatbot has become sentient, https://www.theguardian.com/technology/2022/jun/12/google-engineer-ai-bot-sentient-blake-lemoine
  • Bloomberg, Instagram Post

Smart cities – dream of the future or surveillance horror?

Reading Time: 3 minutes

The idea that technology is reshaping every aspect of life is not a revolutionary statement anymore – rather common knowledge across society. In recent years, we have experienced numerous technological advancements aimed at improving daily life and cities. Among many inventions, we can list autonomous cars, boldly introduced to the mainstream market by Tesla, or light sensors that allow for efficient use of energy in buildings.

By leveraging all these technological incentives, around 10 years ago the concept of smart cities emerged on the horizon as an idea to make it easier for people to live in urban areas. As the foundation for such solutions, policymakers and tech enthusiasts picked the interconnection of the Internet of Things (IoT) – mostly sensors – with "urban" artificial intelligence algorithms. The reasoning behind this pairing is that the IoT can gather vast amounts of data, whereas AI allows for processing it and producing further analyses, conclusions, and eventual recommendations.

Before we go further, it is worthwhile to unpack the term "urban AI," as it is necessary for understanding the inventions presented here. As an example, we can analyze lamp posts packed with sensors and cameras that allow for intelligent light adjustment based on current weather, luminosity, and traffic. Each day as you drive down a highway, the AI algorithm learns about the city and captures different urban features such as rush hours, sundown, or weather forecasts. By processing this information together with historical data, the urban AI implemented in this solution adjusts the lamp posts in the most optimal and sustainable way. Not only can such action prevent possible road accidents thanks to correct lighting, but it also provides the city with savings by leveraging daily sunlight.
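
To make the idea more concrete, here is a tiny, rule-based Python sketch of such a lamp post's decision logic. The sensor fields, thresholds, and brightness levels are my own illustrative assumptions – a real urban AI would learn this behaviour from historical data rather than have it hard-coded.

```python
# A minimal, hypothetical sketch of the "urban AI" lamp post logic described above.
# Sensor names and thresholds are illustrative assumptions, not any city's real system.
from dataclasses import dataclass

@dataclass
class SensorReading:
    ambient_lux: float        # how bright the street already is (daylight, shop windows)
    vehicles_per_minute: int  # traffic counted by the lamp post's camera
    rain: bool                # wet roads reduce visibility, so we brighten slightly

def lamp_brightness(reading: SensorReading) -> float:
    """Return a dimming level between 0.0 (off) and 1.0 (full power)."""
    if reading.ambient_lux > 80:           # enough daylight: switch the lamp off and save energy
        return 0.0
    level = 0.3                            # night-time base level
    if reading.vehicles_per_minute > 10:   # busy road: raise brightness for safety
        level = 0.8
    if reading.rain:                       # compensate for poor visibility
        level = min(1.0, level + 0.2)
    return level

# Example: a rainy rush hour after sundown -> full power
print(lamp_brightness(SensorReading(ambient_lux=5, vehicles_per_minute=25, rain=True)))
```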

With the promise of improved functionality, sustainability, and mobility, the concept of smart cities quickly became a buzzword under which many towns are now being developed. Analyzing the technological and regulatory framework, we have to admit that there is still no chance of developing a completely self-maintaining city thanks to the introduced IoT; however, traveling around the world, we can spot major improvements. As one of the best examples of a smart city, we can list Copenhagen, which utilizes "wireless data from mobile devices, GPS in buses, and sensors in sewers and garbage cans to assess the state of the city in real-time and make improvements to decrease traffic, air pollution, and CO2 emissions."

However, looking at those sensors and the loads of data they gather, we should also reflect on the possible dark side of smart cities. The regulatory framework is still underdeveloped in areas such as data protection and processing. This can cause ethical and legal dilemmas regarding who can access these databases and whether the government should have such profound access to the daily life of its citizens. The implementation of face recognition and sensors all over the city could actually reveal your daily routine, who you meet up with, and where you go. As an interesting example of such data being analyzed by a government, we can mention the Social Credit System introduced in China. Is there a possibility that this practice will extend to Western countries under the bright idea of increasing cities' sustainability by making them smart?


Will robots replace humans eventually?

Reading Time: 2 minutes

With hyper-automation as a rising subject in the market, the question of manual work conducted by humans is up for reflection. We all know that autonomous cars are here. We can shop at a self-checkout store, and it is no longer a surprise to buy groceries without a cashier asking whether we want to pay by cash or card. We can simply agree that automation has been revolutionizing every aspect of our daily and professional lives for several years. However, can humans be fully replaced by robots?


Given the current state of artificial intelligence, we can boldly say that right now there is no such possibility. Therefore, we can happily live without the worry of being useless, as the world still needs our manual work and intellect. However, according to Forbes, in the next 5 years we can expect some shifts in the job market, as there are several technological advancements in the field of automation.

The first area of change will definitely be efficiency. Compared to Robotic Process Automation (RPA) technology, humans without a doubt underperform, even when using their main professional commodity, which is knowledge. Computational power allows numerous business-critical processes to be performed in no time with a drastic reduction of FTE costs. This leaves humans to advance more in areas such as complex tasks, soft skills, and creativity. There will also be space for human intervention in strategic business activities such as building partnerships and communicating with external clients. The main reason is that, in the end, we are still people who appreciate connection and spontaneity, which cannot be provided by cold-hearted machines.

Secondly, some jobs will cease to exist or will be drastically transformed. As a first example, we can look at shops, where human resources are slowly being cut because almost every aspect of the work can be automated. As mentioned previously, we already have self-checkouts; with autonomous cleaning machines there is no need for cleaning services; and even restocking the shelves can be performed without human intervention. This reflection goes beyond brick-and-mortar shops, as the rise of e-commerce displaces them from the market. The same story can be spotted in the automotive industry, with the future possibility of replacing drivers with artificial intelligence.

With many other examples, such as basic customer service, manufacturing, or accounting, we can all agree that alongside its joys there exists a dark side to automation. The light side helps humans and relieves them from performing mundane, repetitive tasks, whereas the dark side demands continuous upskilling of the workforce and slowly replaces blue-collar work schemes.


A Greek God who will steal your money

Reading Time: 3 minutes

The ubiquitous use of technology in our lives comes with a lot of challenges and threats. One of the most pressing phenomena right now is cybercrime, whose rate is constantly increasing. According to Statista, in 2020 there were over 23 thousand incidents of cybercrime in Poland, an 88 percent increase compared to 2019. The most common data breaches involve hacking (45% worldwide), errors (22%), social attacks (22%), and malware (17%). In this article, we will focus on the last category, namely on a particular piece of malware that goes by the name of the most powerful Greek god – Zeus.

Before we move on to the details of Zeus, we first need to briefly go through what a Trojan horse actually is. This term, also drawn from Greek mythology, is very intuitive once you know the story behind it. Just as Greek soldiers hid inside the wooden horse in order to attack Troy without notice, a digital Trojan horse intends to steal valuable data by misleading people about its true intent.

So what’s up with Zeus?

Zeus is a Trojan horse malware that first appeared around July 2007. It is believed to have been developed by a 22-year-old Russian hacker who went by the name "Slavik." Despite his young age, no one should underestimate his coding skills. According to FBI and U.S. law enforcement data, Zeus attempted to steal over 220 million USD from personal and business bank accounts all over the world. Eventually, it managed to accrue only about 70 million USD, which is still an enormous amount of money.

How did it work, then?

Designed to steal sensitive data, especially financial data, Zeus can be introduced to a computer in two ways – via a phishing campaign or a drive-by download. In 2007, social awareness of phishing was still low, and the success rate of such campaigns was really high. They relied heavily on e-mails or text messages socially engineered so that the potential victim would click on a link that infected the device with malware. As Zeus targeted mostly various versions of Microsoft Windows, phishing in this case was based on e-mails. The second option, a drive-by download, relies on the victim downloading a file – for example from a website – without knowing that it is infected. In both cases, Zeus is profoundly hard to detect because it uses stealth techniques and mutates, much like a biological virus.

After infecting a device, Zeus closely monitors the websites the victim visits and recognizes when the person is on a banking website. It can then steal the text the user fills into web forms, capture keystrokes, and take screenshots when the mouse is clicked. Zeus's actions fall under the term man-in-the-browser (MiTB) attack, where the malware behaves as if there were another person in the room with the user, closely watching their actions.

Where is Zeus now?

Due to the Zeus code leak in 2011, its activity declined, and right now it is not perceived as a big threat to users' financial data. With the rise of ransomware, Trojan horses were also pushed into the background of hacking and stopped being so common. However, we shouldn't forget that there are still a lot of threats hiding in the shadows of the internet, and we should wisely assess the websites we visit and the data we provide.

References:

  • Niebezpiecznik, Jak działa ZeuS?, 2012, www.niebezpiecznik.pl/post/jak-dziala-zeus/
  • ZeuS, Dark Net Diaries podcast
  • Wikipedia, Zeus (malware), www.en.wikipedia.org/wiki/Zeus_(malware)
  • Malwarebytes, The life and death of the ZeuS Trojan, 2021, www.blog.malwarebytes.com/101/2021/07/the-life-and-death-of-the-zeus-trojan/
  • FBI, The Zeus Fraud Scheme, www.upload.wikimedia.org/wikipedia/commons/thumb/2/2d/FBI_Fraud_Scheme_Zeus_Trojan.jpg/800px-FBI_Fraud_Scheme_Zeus_Trojan.jpg

Robotic Process Automation

Reading Time: 3 minutes

The idea that robots can replace humans is probably one of the most popular tropes of science-fiction movies. Nonetheless, this far-fetched scenario could actually become real within a couple of years – thanks to today's development of Robotic Process Automation (RPA). It is software that mimics human behavior with the use of artificial intelligence and machine learning. At the base of this software are robots with a wide range of abilities; for example, they can enter data, complete complex tasks, and log in and out of many systems. The remarkable trait of RPA is that it can tackle complex, repetitive business processes without human intervention.

As future entrepreneurs and employees, we should bear this acronym in mind. RPA comes with many advantages, such as cuts in costs, time, effort, and even workforce. As well-configured robots make virtually no errors, it is a credible service that requires only loose scrutiny. Because of that property, RPA made its way into Accounts Payable (AP) automation. Robots can handle invoice management on their own, including actions such as reading invoices, extracting key details, pasting them into SAP, and sending e-mails regarding the document. Similarly, we can look into legal automation. As we all know, the traditional way of handling case documents in courts can be arduous. Piles of paper and data are a literal nightmare for lawyers and keep them away from their real work. This low efficiency also drives up the costs of legal processes. All that hopelessness can go away with the help of artificial intelligence (AI) and business process automation (BPA). Legal departments and in-house counsels can count on robots to help with e-discovery, data analysis, and contract management.
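
To give a flavour of what such a bot actually does under the hood, here is a small Python sketch of the invoice-reading step. The field names, the sample invoice text, and the patterns are invented for illustration – commercial RPA suites wrap this kind of extraction in visual workflows, OCR, and connectors to systems like SAP.

```python
# A minimal, hypothetical sketch of the invoice-reading step of an AP automation bot.
import re

SAMPLE_INVOICE_TEXT = """
Invoice No: INV-2021-0042
Vendor: Acme Office Supplies
Total Due: 1,250.00 EUR
Due Date: 2021-11-30
"""

def extract_invoice_fields(text: str) -> dict:
    """Pull out the key details a bot would later paste into an ERP system."""
    patterns = {
        "invoice_number": r"Invoice No:\s*(\S+)",
        "vendor": r"Vendor:\s*(.+)",
        "total_due": r"Total Due:\s*([\d,\.]+\s*\w+)",
        "due_date": r"Due Date:\s*([\d-]+)",
    }
    fields = {}
    for name, pattern in patterns.items():
        match = re.search(pattern, text)
        fields[name] = match.group(1).strip() if match else None
    return fields

if __name__ == "__main__":
    # A real bot would go on to type these values into SAP and send a
    # confirmation e-mail - those steps are omitted here.
    print(extract_invoice_fields(SAMPLE_INVOICE_TEXT))
```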

 

Looking into the Robotic Process Automation market, there are three foremost leaders: UiPath, Blue Prism Group, and Automation Anywhere. All of them are start-ups launched in 2005 or earlier, but, in spite of venture capital and funding, they didn't become successful until they entered the RPA market. This market is continuously growing and becoming more creative. Many entrepreneurs are looking into innovations and integrating RPA with cognitive technologies: speech recognition, language processing, and machine learning. Many businesses are putting this sort of automation, known as intelligent automation, on their strategic plans for this year.

 


 

To sum up, RPA is undoubtedly revolutionizing the business world. It disrupts mundane work with the futuristic idea of robots. Not only can the benefits of automation be observed in all departments across huge companies, but also in small households. With the use of Robotic Process Automation, we can finally excel in our work without wasting time on repetitive copy-and-paste formalities. As the history of this technology shows, automation has grown within the last few years from its infancy into a service in full demand. However, many fear that robots can take our jobs and increase unemployment rates. For example, government workers usually do tasks that can be easily automated, such as entering data or handling bureaucracy. Similarly, low-qualified employees may ponder their place in digitalized work. On the contrary, we should think about how automation can mitigate the repetitiveness of our jobs and eliminate that error in the data which forces us to re-enter everything into the system. This will enable us to focus on real work and customers, and finally enjoy the awe of life.

 


The Vision of Tomorrow

Reading Time: 3 minutes

From the myriad of new technologies and innovations presented at CES, the world's largest electronics show in Las Vegas, one concept stood out in the spotlight – the Mercedes-Benz VISION AVTR. This advanced concept vehicle, inspired by James Cameron's iconic movie Avatar, is part of Mercedes-Benz's strategic agenda for the coming years, which strives for a more sustainable future. The connection to the film results from its essential message: the line of reasoning behind the Mercedes-Benz plan fits perfectly with Avatar's environmental and spiritual themes. During the reveal, James Cameron, the director of Avatar, joined Mercedes-Benz representatives to highlight the importance of sustainability and of technology coexisting with humans without interrupting the development of nature.

 

This futuristic concept vehicle is said to be entirely eco-friendly. It is electric, carbon-neutral, and supposed to interact with the surrounding world. The exterior design of the car is meant to blend in with nature. The doors, when opened, imitate a dragonfly, and at the back of the car are 33 individual hatches that resemble the breathing scales of a reptile.


The futuristic design extends even to the interior of the vehicle: there is no steering wheel, and the car is driven using biometrics. You are one with the car. This concept of using the driver's biometrics is adapted straight from the Avatar movie and is meant to imitate a symbiotic relationship between the driver and the vehicle. The comfort and infotainment controls are based on the driver's hand: with waving gestures, we are able to operate the car.

 

After getting in, the control system lights up on your right hand, and with gestures and waving you can operate this eye-popping vehicle. By touching the element in the center of the car, you wake it up. The lights and the center element move up and down to imitate breathing and a heartbeat. Not only does this car drive forward and backward like a regular car, but it can also move sideways. Its large, unusual, flamboyant wheels enable the car to move perpendicularly, a little like a crab walk. The vehicle went through a real-life test on the roads of Las Vegas and proved itself ready to hit the streets and saturate them with an aspirational, eco-friendly idea of the future.

 

 

The presence of James Cameron and his collaboration with Mercedes-Benz is not only about sustainability, but also about the long-awaited sequel to the Avatar movie. Whether it is just costly marketing or a real car that could in time be seen on the market and the roads, we cannot deny that it is a successful fulfillment of the statement "Vision of Tomorrow."

 


The Perfect Wave

Reading Time: 3 minutes

 

Escaping the cold December noons, you travel to the Hawaii Pipeline. The beach, renowned for its waves, is gleaming in the sunshine. You feel the light breeze of salty water on your cheek and vitamin D soaking into your skin. Having your best holidays, you decide to try this perfect blend of warm water and high waves. You stand on your surfboard and try your skills in the ocean.

 

But this scenario is usually far from reachable. Not only is surfing highly costly, but, as the weather is often unpredictable, it can also be hard to find high waves. In Europe, the main cost-generating factor is the absence of an ocean that can create waves like those we can observe in Hawaii or Africa. Even assuming we have the money for such a trip, we cannot be 100% sure that the journey will satisfy our surfing cravings due to, for example, a lack of wind. To fight both of these factors and raise the popularity of surfing across the globe, Kelly Slater, an eleven-time world surfing champion, came up with the idea of creating a perfect wave machine.


In 2009, this surfing legend, with a spark of creativity and a lust for groundbreaking change in the surfing industry, teamed up with University of Southern California professor Adam Fincham to find the perfect wave. Inspired by an iconic movie, The Endless Summer, they began working on outsmarting nature and came up with a ring-shaped lake in which a wave endlessly circled the center. But Slater's idea was to create a rectangular pool that would perfectly imitate the ocean's atmosphere and the skills it demands. After studying complex mathematics to find a formula that could crack the geophysical fluid dynamics, they unveiled Kelly's Wave in 2015. In the middle of the California countryside, they set up the Surf Ranch and started disrupting the surfing industry.


In this pool, approximately 2,000 feet long and 500 feet wide, the artificial wave is created by a unique hydrofoil system. The pool is filled with 15 million gallons of UV-and-chlorine-treated freshwater, and the main 100-ton hydrofoil is named "The Vehicle." Running down a track with the help of more than 150 truck tires, it can create 6.5-foot-tall waves that travel at about 30 kilometers per hour. Of course, for beginners, the height and speed of the wave can be adjusted to provide a safe and enjoyable experience. This invention astonished the World Surf League (WSL), which is currently the most significant stakeholder in the Kelly Slater Wave Company. Furthermore, since 2019, surfers have been able to compete in the Freshwater Pro championship, held by the WSL at the Surf Ranch.

 

The truth is that Kelly’s wave changed the surfing industry, and, in spite of the numerous wave pools created, Slater’s one is the best in history. Outsmarting nature, the technology behind it enables surfers to create a contest or just enjoy the sport without even thinking about the weather. But this invention also divided surfers into two groups – one that admires surfing because of its search for a wave, vibing with nature and the “perfect imperfections” of the ocean’s dynamics, and the other one that is more focused on the skills, adjusting techniques and making surfing mainstream.

 

Resources:

  • https://www.newyorker.com/magazine/2018/12/17/kelly-slaters-shock-wave
  • https://www.cnbc.com/2018/05/04/kelly-slater-built-a-wave-machine-that-could-change-surfing.html
  • https://www.newyorker.com/podcast/the-new-yorker-radio-hour/the-new-yorker-radio-hour-extra-kelly-slaters-perfect-wave-brings-surfing-to-a-crossroads
  • https://www.surfertoday.com/surfing/the-facts-and-figures-behind-kelly-slater-surf-ranch
  • https://blog.theclymb.com/out-there/10-of-the-worlds-best-waves/

 


SMART DUST

Reading Time: 3 minutes 


Dust formed from particles of soil, pollution, or volcanic eruptions is not an innovation. It is actually a very unpleasant component of the wind that makes you cough. You can also think of it as the thing that gathers all over that thick book you got for your 8th birthday from some aunt hoping for your bright education. Nevertheless, at the turn of the 21st century came a revolution in dust. With many futuristic ideas set out in the novels of authors such as Stanisław Lem or Neal T. Stephenson, one agency of the US Department of Defense started to ponder whether they could be applied in real life. This action sparked the interest of UCLA, and the project of smart dust emerged.

 

So, what is smart dust?

 

The best way to think about smart dust is as a tiny microelectromechanical system (MEMS). Another name for these systems is motes. They can be found in many shapes, but usually smart dust devices are tiny sensors or robots. What is worth pointing out is that these nano-structured mechanisms come with numerous remarkable abilities. This seemingly crazy technological innovation straight from the future combines computing, sensing, and wireless communication in a small, autonomously powered device. As smart dust is typically no more than a few millimeters in size, it can be carried into the environment just like a particle of dust. This feature enables it to enter any small space and, for example, check its humidity. Despite its small size, smart dust can store a surprisingly significant amount of data and wirelessly connect to other MEMS. It can also detect everything from light to vibrations and report the features of a particular region in a matter of seconds. Moreover, each device carries an onboard computer with which it can process the collected data.
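
To picture how motes work together, here is a toy Python simulation of the sense-store-report cycle described above. The sensor types, value ranges, and the "wireless" reporting are simplified assumptions of mine, not the behaviour of any real smart dust product.

```python
# A toy simulation of mote behaviour: each "mote" samples its surroundings,
# stores readings locally, and reports to a base station over a pretend wireless link.
import random
from dataclasses import dataclass, field

@dataclass
class Mote:
    mote_id: int
    readings: list = field(default_factory=list)

    def sense(self) -> dict:
        """Sample light, vibration, and humidity (here: random numbers)."""
        reading = {
            "light": round(random.uniform(0, 100), 1),
            "vibration": round(random.uniform(0, 1), 2),
            "humidity": round(random.uniform(20, 90), 1),
        }
        self.readings.append(reading)  # tiny on-board storage
        return reading

    def report(self) -> dict:
        """'Wireless' report: the mote sends its latest stored reading."""
        latest = self.readings[-1] if self.readings else {}
        return {"mote": self.mote_id, **latest}

# A base station polling a handful of scattered motes
motes = [Mote(mote_id=i) for i in range(5)]
for mote in motes:
    mote.sense()
print([mote.report() for mote in motes])
```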

 

What can we use it for?

 

This kind of Internet of Things (IoT) technology first emerged with hopes of applying it in the military domain. Smart dust can be used in warfare to determine the characteristics of the battlefield, which is very important for creating a good strategy. As it is almost invisible, its abilities can tip the battle in favor of the side using it. With this also comes its application in security, as MEMS can keep an eye on people and products. Furthermore, knowledge of the environment is a matter of utmost importance not only for the military but also for agriculture. Because of that, smart dust devices have paved their way into farming, as they can monitor particular crops to ascertain water, fertilizer, and pest-control needs. The most extraordinary application can be found in a paper written by a couple of researchers at UC Berkeley, who devised neural dust that could be implanted in the human brain in order to measure its functioning. Although this technology might sound far too futuristic for you to own, you can actually print it on a 3D printer. There are a few commercially available designs and projects which you can deploy on a fairly modest budget.

 

 

To sum up, smart dust is undoubtedly revolutionary for many industries. The most promising applications can be found in healthcare and security. It disrupts mundane solutions with cutting-edge technology. But there is also a dark side. As this technology is geared toward surveillance and is barely traceable, it raises questions about privacy. How can we be sure that nobody is spying on us? With that question in mind, and knowing that anybody can create smart dust and use it anywhere, this reflection elevates data protection to a whole new level.

 

Resources:

  • https://www.forbes.com/sites/bernardmarr/2018/09/16/smart-dust-is-coming-are-you-ready/
  • https://www.youtube.com/watch?v=wnnWrLt_RCo
  • https://www.forbes.com/sites/eliseackerman/2013/07/19/how-smart-dust-could-be-used-to-monitor-human-thought/#4027a3197ebf
  • https://www.youtube.com/watch?v=ufr7ZT1CNwA
  • https://en.wikipedia.org/wiki/Smartdust

New Art

Reading Time: 3 minutes

Works generated by AICAN (https://www.aican.io)

Most people – by which I mean those who sometimes visit an art gallery on holiday – perceive art as arduous, labor-intensive paintings or sculptures with great aesthetics. We usually associate it with Renaissance beaux-arts, Italian Mannerism, Rubenesque figures, or the Baroque era. Needless to say, as art is a very subjective phenomenon, there is no single definition of it. But we usually agree upon the definition of an artist: a person who creates art.

But what if the art was created by artificial intelligence? Can we perceive a painting created by an algorithm as art?

Portrait of Edmond de Belamy, Obvious

For the New York art scene, the answer to this question is yes. In October 2018, an auction at the New York auction house Christie's changed the game. An algorithm-generated print, Portrait of Edmond de Belamy, was sold for the enormous price of $432,500. The upshot of this event was the so-called "AI-art gold rush." Many programmers found their artistic niche on the market and started creating various algorithms to generate art. Among their works, we can find pieces such as Psychodelic Wisteria, Unity Rising, and Divided Sunshine by AICAN, the DeepDream algorithm by Alexander Mordvintsev, and many others.

 

The main class of machine learning systems used to produce AI art is the generative adversarial network (GAN). It works in the following way: an artist provides the GAN with a training set (a database to which he or she uploads, for example, 300 medieval European paintings), and then the algorithm creates a new canvas imitating the data set. This way, the system can generate authentic-looking medieval art in no time, using only an algorithm defined by an algebraic formula. Of course, we can influence the final output by generating one more random painting or by playing with the training set: adding our own sketches or drawings. But in many cases, AI art is very eccentric due to its face deformations, as the algorithm does not know how to imitate a human face accurately.
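
For readers curious what the adversarial idea looks like in practice, below is a minimal GAN sketch in PyTorch. It pits a generator against a discriminator on a toy two-dimensional "training set" standing in for the 300 paintings – real AI-art systems use the same principle, just with image data and far larger networks.

```python
# Minimal GAN sketch (illustrative only; the "artworks" here are just 2-D points).
import torch
import torch.nn as nn

latent_dim = 16   # size of the random noise vector fed to the generator
data_dim = 2      # each "artwork" is a 2-D point, standing in for an image

# Generator: turns random noise into a fake sample that mimics the training set.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
# Discriminator: scores how likely a sample is to come from the real training set.
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

# Stand-in "training set": points clustered around (2, 2), playing the role of
# the medieval paintings mentioned in the text.
real_data = torch.randn(512, data_dim) * 0.3 + 2.0

for step in range(2000):
    # --- train the discriminator on real vs. generated samples ---
    noise = torch.randn(64, latent_dim)
    fake = G(noise).detach()
    real = real_data[torch.randint(0, len(real_data), (64,))]
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + loss_fn(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # --- train the generator to fool the discriminator ---
    noise = torch.randn(64, latent_dim)
    g_loss = loss_fn(D(G(noise)), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(5, latent_dim)))  # five new "works" imitating the training set
```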

 

So the heated debate is about whether we can perceive these algebraically generated pixels as art.

 

Critics of AI art argue that a painting with no intent and no message cannot be art. Not only can't it convey any profound virtues and values, as the machine doesn't know basic human emotions, but we also cannot identify with the system. But isn't art also a process? A process of creation and reflection, a bridge between creativity and reality? Behind each of these algorithms stands a person who created the generative model and gathered the data set. Many of them created heaps of distinctive artworks to upload to that data set. Moreover, to tackle the question of originality, there is an evolution of the GAN – a new system called the CAN, or creative adversarial network. This system creates smooth abstract images precisely so as not to produce anything recognizable.

 

Over the years, society has always been intrigued by the collaboration between art and technology. The twentieth and twenty-first centuries enabled us to see its output, which in many cases can be compared to human-made art. Between DeepDream or Magenta, where everyone can create their own AI art, and Portrait of Edmond de Belamy, sold at the New York auction house, there is a striking disparity, but, arguably, both of them can be perceived as copacetic, eccentric works of art.

 

 

References:

  • https://aiartists.org/ai-generated-art-tools
  • https://www.americanscientist.org/article/ai-is-blurring-the-definition-of-artist
  • https://thenypost.files.wordpress.com/2018/10/ai-portrait-ap.jpg?quality=90&strip=all&w=1033
  • https://www.aican.io
  • https://en.wikipedia.org/wiki/Generative_adversarial_network (GAN)

 
