The company installed the cameras in February 2021, explaining that they were needed for safety.
Drivers for Amazon’s delivery service in the US must now sign a “biometric consent” form in order to continue working for the company, The Verge reported on March 24.
The form covers consent to the collection of data from cameras installed in delivery vans. Drivers must agree to the use of “certain technology, including cameras” as a “condition of delivering packages for Amazon,” according to Vice.
The drivers in question are those who rent Amazon vans under the Delivery Service Partner programme. According to Vice, about 75,000 drivers could be affected.
What kind of data the company collects depends on the equipment installed in the van, The Verge suggests. But the agreement covers a wide range of data, including cameras that use facial recognition to confirm the driver’s identity and connect to an account, according to Vice.
The data collected includes, for example, the van’s movement – speed, acceleration, braking, turns and distance travelled – “potential traffic offences” such as speeding or driving with an unbuckled seatbelt, as well as “potentially dangerous driving behaviour”, such as the driver being distracted from the road or falling asleep.
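As an illustration, flagging a “potential traffic offence” such as speeding from a telemetry stream can be a simple threshold rule. The field names, numbers and thresholds below are hypothetical, not Amazon’s actual system:

```python
# Hypothetical telemetry samples: (timestamp_s, speed_mph, posted_limit_mph).
samples = [(0, 28, 30), (1, 33, 30), (2, 36, 30), (3, 29, 30)]

def flag_speeding(samples, tolerance_mph=0):
    # Return the timestamps at which recorded speed exceeds the posted limit.
    return [t for t, speed, limit in samples if speed > limit + tolerance_mph]

print(flag_speeding(samples))     # timestamps where the limit was exceeded
print(flag_speeding(samples, 5))  # same check with a 5 mph tolerance
```

A real system would of course fuse camera, GPS and accelerometer data rather than a single speed reading, but the event-flagging logic is of this shape.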
The company says it installs the cameras for “safety” and “to improve delivery”. But some drivers have already refused to sign the agreement.
In February, Amazon installed artificial-intelligence cameras in vans rented by drivers participating in the Delivery Service Partner programme. The cameras have built-in software that can detect 16 different safety issues, including distracted driving, speeding, sharp braking and more.
In early March, senators from five US states wrote to Amazon  saying that the use of surveillance cameras in delivery vans “raises important privacy and worker oversight issues that Amazon must respond to”.
In September 2020, human rights activists had already spoken out about harsh working conditions at Amazon – for example, the company runs an electronic employee-monitoring system, there are cameras in warehouses, and drivers’ locations are constantly recorded.
The ills of psychology and psychotherapy have been the same for years – poorly structured data and a multitude of schools and therapeutic methods whose effectiveness is poorly verifiable, or not verifiable at all.
Will algorithms be able to help the humans who created this imperfect, flawed system?
According to a 2016 meta-analysis published in Psychological Bulletin, covering the past 50 years of research, psychology has made no progress in predicting acts of suicide. Subsequent replication studies laboriously try to separate the wheat from the chaff and, in doing so, show how much pseudoscience there is in psychology. As an introduction, it is worth mentioning the famous, almost star-status psychologist Philip Zimbardo, whose experiment did not survive the test of time either and turned out to be an ordinary fraud, as discovered by a French documentary filmmaker.
Despite the crises psychology is going through, there are also areas where researchers are applying algorithms and solutions bordering on artificial intelligence and machine learning. Let’s take a look at the most interesting solutions scientists have created so far. We will omit historical algorithms such as ELIZA – one of the first intelligent interlocutor-therapists – and focus on a subjective pick of three more advanced ones.
Say hi to Ellie, avatar therapist.
Even though the algorithm is already old and the therapist’s graphical rendering is poor, it still makes a big impression. The algorithm analyzes our facial expressions, gestures, voice timbre and eye movements, and conducts the conversation as in classical therapy. Studies showed that veterans suffering from post-traumatic stress disorder (who participated in the pilot) were more willing to answer questions from a digital “human” than from a real therapist. Clinicians appreciated the results, impressed by the algorithm’s effectiveness and by patients’ openness during conversations with the avatar.
A much more specialized approach is proposed by Spring Health, which creates algorithms that monitor only mental illnesses.
“We are able to predict whether a given patient’s condition will improve after the selected therapy,” says Adam Chekroud, co-founder of Spring Health.
Spring Health’s algorithms are used, for example, by employees of Amazon and Gap Inc. For now, however, the diagnosis consists of selecting appropriate drugs based on questionnaires filled in by the patients themselves.
This process is, of course, largely automated and boasts very high treatment effectiveness, but we are still some way from an artificial intelligence that could act as a full-fledged, effective therapist. For now, the effectiveness of the algorithms depends on analyzing a sufficiently large data set: the algorithm analyzes data on several hundred thousand patients, takes all their medical data into account, and on this basis builds a network of relationships between patient types and the therapies that proved most effective.
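The core idea described above – matching a new patient against a large database of past patients and the therapies that worked for them – can be sketched as a nearest-neighbour lookup. Everything below (the feature vectors, the therapy labels) is a hypothetical toy, not Spring Health’s actual model:

```python
# Hypothetical records: (questionnaire feature vector, therapy that proved effective).
history = [
    ((0.9, 0.1, 0.3), "therapy_A"),
    ((0.2, 0.8, 0.5), "therapy_B"),
    ((0.1, 0.2, 0.9), "therapy_C"),
]

def recommend(patient):
    # 1-nearest-neighbour: suggest the therapy that helped the most similar past patient.
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, therapy = min(history, key=lambda record: sq_dist(record[0], patient))
    return therapy

print(recommend((0.85, 0.15, 0.25)))  # closest to the first record
```

Real systems use far richer features and models, but the “network of relationships between patient type and effective therapy” reduces, in its simplest form, to this kind of similarity search.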
On the basis of such a network of dependencies, algorithms are already able to identify people with mental disorders long before any human doctor diagnoses them.
So we can risk the statement that in the future machines will be responsible for healing not only our bodies, but also our minds. The only question is how distant that future is.
Sources:
– American Psychological Association, “After Decades of Research, Science Is No Better Able to Predict Suicidal Behaviors”
– https://bit.ly/3cszETI
– https://bit.ly/3sy9ogi
– https://on.mktw.net/3w64GIN
Around 50 years ago it became clear that the world would never be the same place again: the internet revolution had started. Today we are continuing that process, and the pace of change is only increasing. Overall trends show that the differences between technological domains are blurring to meet the demands of modern society. The result is the birth of IoT applications in various domains, on both specialized and mass markets.
This in turn gives tremendous business and career opportunities for all types of specialists from various domains, because:
There is no smart city without smart sensors.
Smart sensors need an embedded AI.
Embedded AI requires data scientists.
This short chain shows how different technologies (in this case AI) are intertwined with the IoT. Unfortunately, it also creates a high degree of uncertainty and makes business and career decisions more complex. In order to act wisely, you need a good understanding of the roots of these technologies.
Internet of Things (IoT)
As you might guess, the Internet of Things is a somewhat muddled concept, so I would like to divide it into its two components – the internet and the things – and introduce it that way.
Firstly, we need to look at the very basics and acknowledge that the existence of our universe is possible thanks to the 4 fundamental forces of nature: Gravity, The weak force, Electromagnetism, and The strong force.
For us, the most important is electromagnetism, which over the past centuries we have tried to tame and benefit from. Progress was limited until the 19th century, when we started to process the energy in more sophisticated ways.
At that time we uncovered the potential of radio frequencies and made developments such as electric light, the transformer and wireless communication.
If you are familiar with the early history of the internet, you probably recognize the name ARPANET, a military program developed to connect university centers in the United States. The next step was the creation of the Internet Protocol (IP); soon after that, the 1G cellular system was born, and the rest of the story you know.
As you can see, the internet was initially created to connect specialized machines with at least some computing power – computers and, later, mobile devices. On the other hand, back in the ’70s there were already early applications of machine-to-machine (M2M) communication.
You might ask: how is today’s IoT different from what it was in the past?
A great answer to this question is a quote by iotforall:
“The biggest difference between M2M and IoT is that an M2M system uses point-to-point communication. An IoT system, meanwhile, typically situates its devices within a global cloud network that allows larger-scale integration and more sophisticated applications. Scalability is another key difference between M2M and IoT.”
Let’s quickly visualize the power of IoT in practice.
Taking healthcare as an example, a truly seamless experience might look like this: mattresses monitoring your sleep, chairs monitoring your posture, various medical devices testing your condition, microphones collecting data about your cough to monitor lung condition, and so on. This in turn means we need billions more connections than we have today – possible only with a new generation of telecommunications architecture.
The answer to this demand is the improved fifth generation (5G) of cellular communication – protocols, rules and methodologies standardized by the 3GPP and the ITU.
It is mainly focused on:
The wide availability of the signal (enhanced mobile broadband)
Significantly decreased latency compared to 4th generation technology, especially useful for applications requiring critical reliability e.g. remote surgical operations
Enhanced machine-to-machine communication, supporting up to 1 million devices per square kilometer
The technology still faces many challenges, both engineering and societal. As you can see, many of the problems raised by IoT advocates are not going to be solved by the rise of 5G architecture alone. Fifth-generation cellular may soon become outdated, which is why various R&D departments are already working on 6G.
It is projected that the next few decades will be a constant transition between generations of cellular technology. Hence 4G, 5G and 6G will overlap, and it would be hard to talk about any one of them individually in terms of predictions. That is why I would like to describe in more detail the potential future business opportunities resulting from all three.
It is projected that by 2030 the overall telecommunication advancements will allow:
Transfer of senses like smell
A private observation of mine: with the rise of political trends and ideas to give every person an income from the day they are born, there will be hundreds of consequences (not rating them good or bad), and one of them will be increasing boredom. The rise of all sorts of gaming apps using this feature is a marvelous way to create new experiences and disrupt the entertainment sector.
Inch-perfect localization services
Space! The answer to an unbreakable internet connection all around the world. Satellite-based internet is a huge opportunity to enter growing markets such as Nigeria, where many of the services Google wanted to provide failed due to poor internet connectivity. In the next decade this will change for the better. It is worth mentioning that almost everybody in Nigeria is a smartphone user.
Another aspect is the production and management of the data gathered by satellites. Geospatial analytics is a great source of knowledge – e.g. the ability to measure the amount of oil in Saudi Arabian tanks and send this data to the New York Stock Exchange to see how stock prices relate to real demand.
Not to mention GPS advancements that will make navigating, targeting, etc. more efficient.
How to benefit from it?
I hope you now have a somewhat broader sense of how it all may look in the future – and that we are only getting started.
Huge problems, if solved, are usually followed by huge earnings. Here is a list of problems and a few interdependencies that might help you build your future business model or career.
Connectivity — discovering ways to establish a reliable wireless connection. As we already know, a connection is just a radio wave of a certain frequency, hence it can be modulated. In the future this modulation could be made “smart” thanks to deep learning and machine learning.
Continuity — optimizing the battery life. How to predict long-term battery life by analyzing data from charge-discharge cycles?
Compliance — staying up to date with evolving regulations. As AI is already improving compliance in financial services, it has significant potential for the smart management of constantly changing regulations.
Coexistence — billions of devices per square kilometer means vast amounts of data that need to be processed. Smart sensors with embedded AI are a crucial element of Industry 4.0.
Cybersecurity — a secure connection is the most important part of IoT. Which technology will make it possible? Blockchain IoT has a projected 45% CAGR; and what will be the successor of NFC (near-field communication)? Many questions and few answers = a good business opportunity.
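To make the “Continuity” item above concrete: one naive way to predict long-term battery life is to fit a line to measured capacity fade over charge-discharge cycles and extrapolate to an end-of-life threshold. The measurements below are invented for illustration:

```python
# Hypothetical capacity measurements (% of rated capacity) after N charge cycles.
cycles   = [0, 100, 200, 300, 400]
capacity = [100.0, 98.0, 96.1, 94.0, 92.05]

def fit_line(xs, ys):
    # Ordinary least squares for a single feature: y = slope * x + intercept.
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

slope, intercept = fit_line(cycles, capacity)

def cycles_until(threshold_pct):
    # Extrapolate: how many cycles until capacity degrades to the threshold?
    return (threshold_pct - intercept) / slope

print(round(cycles_until(80)))  # rough end-of-life estimate at 80% capacity
```

Real battery degradation is non-linear and temperature-dependent, which is exactly why the article suggests machine learning here; the linear fit is only the simplest baseline.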
The transformation process will not happen overnight. 5G is very different from 2G, 3G and 4G because it is what makes the next industrial revolution possible. Add the coronavirus and the tendency to move industrial production from China back to Europe, and there are real opportunities for businesses focused on adapting Europe’s industry to the new circumstances. All solutions connecting old infrastructure with new technologies are going to be crucial.
In the IoT world embedded AI will be used very often, and a certain degree of independence requires the applied algorithms to be reliable. To achieve that, the algorithms will have to be certified. If you like both law and science, such a research center may be a good career choice.
As space will become a fundamental part of IoT systems, should you get the opportunity to work at a NewSpace company, it is highly possible that you will benefit financially and enjoy long-term, stable employment.
Motion is everything. We are entering a time when the motion of people, cars, ships and more will be tracked, and the data produced this way will be the new oil of our century. To benefit from it, you need to solve the issues known as the 5 C’s of IoT and follow the newest media coverage of Industry 4.0 and the space race.
Artificial intelligence is one of the fastest-growing fields today. It is currently being used in several disciplines across the globe. However, this technology needs to be monitored to prevent any bias or negative impacts from affecting the world.
Responsible AI has been developed to help prevent harmful implications of AI through policies on bias, ethics and trust. It is relatively new; however, many companies favor incorporating it into their infrastructure. Responsible AI is about managing and regulating intelligent systems to make sure they do not harm society.
There are three major considerations when determining whether a given piece of AI technology is suited to society:
Awareness of the accountability of AI research and development is necessary. That is, who is to blame if an intelligent machine makes an error? The research should be capable of determining the possible effects of releasing a system into the world.
AI algorithms learn from the data they receive. However, they should be capable of reasoning and justifying their actions.
Transparency is required to make sure people know what a particular intelligent system does, and what it is capable of. It requires governance to ensure it delivers societal good.
Responsible AI research is being done across different platforms to devise rules and regulations to govern AI. RRI (Responsible Research and Innovation) is an interactive and transparent process that holds individual innovators or groups of innovators responsible for the acceptability, desirability and sustainability of a given technology in society. It can be implemented between different parties using the following approaches:
Permanent individuals/groups from different backgrounds can discuss the innovation and its possible outcomes. This includes ethical review boards within organizations.
A set of rules and guidelines that the outcomes of research and innovation should follow, so that they are ethical, legal and safe.
A code of conduct detailing behavioral choices for stakeholders in different sectors.
Industry standards that set a minimum safety threshold for the testing and development of new technology.
Approaches and methods for anticipating the future impacts of a given technology, such as scenario planning and modeling.
Since AI is prone to bias and misjudgment, RRI can help ensure the technology is used for the ultimate good of the world. In the end, human input is required to make sure technology does not turn against humanity and that processes follow ethics, trust and bias standards. This improves accountability and promotes a better public image of such systems.
The market for AI in transportation is expected to grow to $10.3 billion over the next 10 years. Many companies in the trucking industry are moving toward full automation, and thanks to law changes in some US states, self-driving trucks will be able to drive in groups of two or more. Waymo, the American leader in autonomous-driving technology, has teamed up with the German truck maker Daimler to deliver Level 4 (High Driving Automation) vehicles. That means the trucks will be able to react and intervene if there is a dangerous situation on the road. Although the system does not require human interaction in most cases, it still offers a manual override. For now, Level 4 vehicles can only operate within urban areas where the average speed is 30 mph. Waymo’s projects are now mostly focused on highway transportation and small freight carriers.
Before this collaboration, the American company had been testing its technology in driverless cars in Arizona, where it started a fully automated taxi service. Why do so many people see this as the future? AI is commonly used to predict and detect traffic conditions. On highways, trucks with AI technology will not only save money and lower emissions, but also increase efficiency. With no human interaction needed, we can manage deliveries in a simpler way. Adding AI also reduces operational costs: fewer people need to be hired, and the machines work every day without a break. With the possible paths of pedestrians and cyclists modelled in software, the likelihood of an accident should decrease.
Unfortunately, current technology cannot replace humans entirely. Besides driving, truck drivers also take care of loading, vehicle inspection and customer service. An issue I should have mentioned first is the very high cost of development and, more importantly, maintenance: if a company wants the system to constantly observe traffic and improve itself based on patterns, the software and hardware need to be updated very often. Another drawback is that machines have no emotions, so whether we like it or not, we have to program how they choose in a fatal situation – for example, whether the driver’s or the pedestrian’s life takes priority. AI is the next step in the development of logistics and transport. There will always be pros and cons to every progressive idea, but we cannot be intimidated by them. With out-of-the-box thinking, this technology could become very useful and help us improve the way we manage transport and communication.
It has been exactly a month since we learned the winner of the latest edition of TechCrunch Disrupt Berlin 2019. Congratulations to the newest Startup Battlefield winner, Scaled Robotics, which designed a robot that can produce 3D progress maps of construction sites in minutes.
Scaled Robotics wins the Startup Battlefield Source: https://techcrunch.com/2019/12/12/scaled-robotics-wins-startup-battlefield-at-disrupt-berlin-2019/
How does Scaled Robotics work?
The startup has created a robot that trundles autonomously around construction sites, using a 360-degree camera and custom lidar system to systematically document its surroundings. All this information goes into a software backend where the supervisors can check things like which pieces are in place on which floor, whether they have been placed within the required tolerances, or if there are safety issues like too much detritus on the ground in work areas. The data is assembled automatically but the robot can be either autonomous or manually controlled.
Why do construction companies need Scaled Robotics?
Construction is one of the world’s largest but also most inefficient and wasteful industries. By some estimates, nearly 20% of every construction project is rework. The problem of waste and rework is so widespread that the industry operates on an average margin of 1–2%. The root of this problem is that the construction industry still relies on tools and processes developed over 100 years ago to tackle the problems of today. The robot can make its rounds faster than a couple of humans with measuring tapes and clipboards, and someone carrying a stationary laser-ranging device from room to room simply works too slowly. Working from outdated data is one of the main problems for developers, as one client case confirms: one of the first times the startup took data on a site, the client was completely convinced everything they had done was perfect. Scaled Robotics put the data in front of them, and it turned out a structural wall was simply missing – and had been missing for four weeks. With Scaled Robotics’ technology, such situations do not happen.
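The “missing wall” check described above boils down to comparing as-built scan points against the positions expected from the CAD model. A toy 2-D version (the coordinates, names and tolerance are invented for illustration, not Scaled Robotics’ actual pipeline) might look like this:

```python
# Hypothetical CAD element positions (metres) and as-built laser-scan points.
cad_elements = {"wall_A": (0.0, 0.0), "wall_B": (5.0, 0.0), "column_1": (2.5, 3.0)}
scan_points  = [(0.02, -0.01), (2.48, 3.03)]  # nothing was scanned near wall_B

def missing_elements(cad, scan, tol=0.1):
    # Flag every CAD element with no scan point within `tol` metres of its position.
    def near(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= tol ** 2
    return [name for name, pos in cad.items()
            if not any(near(pos, pt) for pt in scan)]

print(missing_elements(cad_elements, scan_points))
```

The real system works on dense 3D point clouds and also checks placement tolerances, but the core comparison of “what the model expects” against “what the scan contains” has this structure.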
Simultaneous localization and mapping (SLAM) tech Source: https://techcrunch.com/2019/12/11/scaled-robotics-keeps-an-autonomous-eye-on-busy-construction-sites/
Technologies that support people’s work
There is no doubt that the entire competitive advantage of Scaled Robotics lies in innovative technology. An advantage of simultaneous localization and mapping (SLAM) tech is that it measures from multiple points of view over time, building a highly accurate and rich model of the environment. The Automated Construction Verification system, together with scans from traditional laser scanners, can verify the quality of the build, providing high-precision information to localize mistakes and prevent costly errors. What is more, Automated Progress Monitoring helps track the progress of the construction project and provides actionable information for site managers. By comparing the scan to a source CAD model of the building, it can paint a very precise picture of the progress being made. Scaled Robotics also built a special computer-vision model suited to the task of sorting obstructions from the construction and identifying everything in between.
What Scaled Robotics did is that they rethought the entire construction process. Their mission is to modernize construction with Robotics and Artificial Intelligence, thereby creating a manufacturing process that is lean, efficient and cost-effective.
Does Scaled Robotics have a chance to revolutionize the construction industry on a global scale?
Duolingo is starting the year off strong: it has been named one of the top startups to work for in the growing field of machine learning. These and many other insights come from a Crunchbase Pro analysis that used Glassdoor data to rank the best machine-learning startups to work for in 2020. Why is Duolingo a unique company?
Duolingo logo Source: https://www.duolingo.com/
Duolingo AI Research
Duolingo AI Research is one of Duolingo’s fastest-growing teams. They use real-world data to develop new hypotheses about language and learning, test them empirically, and ship products based on their research. Duolingo has revolutionized language learning for more than 300 million people around the world, and keeps bringing creative, interdisciplinary ideas on how to deliver high-quality education to anyone, anywhere, through AI.
Duolingo AI team logo Source: https://research.duolingo.com/
Tools and data from Duolingo
Duolingo uses AI to adapt learning content to learners’ levels. The startup regularly releases its internal tools to the public so everyone can read more about its research innovations. One of them is the CEFR Checker, a tool that determines whether a text is appropriate for beginner, intermediate or advanced learners of English or Spanish. It works by analyzing vocabulary and highlighting words by their reading-proficiency level according to the Common European Framework of Reference (CEFR). Duolingo uses interactive tools like this one to help people revise content (e.g. Podcasts and Stories) for particular levels.
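In its simplest form, this kind of checker can be approximated by a lexicon lookup that tags each word with a CEFR level. The tiny dictionary below is a made-up stand-in for the large annotated lexicons and statistical models a real tool like the CEFR Checker would use:

```python
# Hypothetical CEFR lexicon; real tools rely on large annotated vocabularies.
CEFR_LEXICON = {"cat": "A1", "house": "A1", "travel": "A2",
                "analyze": "B2", "ubiquitous": "C1"}

def tag_words(text):
    # Tag each word with its CEFR level, or None when the word is unknown.
    words = text.lower().replace(".", "").split()
    return [(w, CEFR_LEXICON.get(w)) for w in words]

print(tag_words("Ubiquitous cat"))
```

An editor can then highlight C1 words when the target audience is a beginner, which is essentially the revision workflow the article describes.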
The Duolingo CEFR Checker: an AI tool for adapting learning content Source: https://making.duolingo.com/the-duolingo-cefr-checker-an-ai-tool-for-adapting-learning-content
Duolingo is also committed to sharing data and findings with the broader research community. An example is the SLAM Shared Task, which contains data for the 2018 Shared Task on Second Language Acquisition Modeling (SLAM). The corpus contains 7 million words produced by learners of English, Spanish and French, and includes user demographics, morpho-syntactic metadata, response times and longitudinal errors for over 6,000 users across 30 days.
Why should people consider working at Duolingo?
The language-learning app Duolingo is valued at $1.5 billion after a $30 million investment by Alphabet’s CapitalG. Bookings growth has risen from $1 million to $100 million in less than three years for the most downloaded and top-grossing education app worldwide. What is more, Pittsburgh’s first venture-capital-funded $1 billion start-up plans to increase staff by 50% with the new funding. Duolingo has been adding users and revenue at an impressive pace, continuing to solidify its position as the No. 1 way to learn a language globally.
Why should people consider working in machine learning?
Demand remains high for technical professionals with machine-learning expertise. According to Indeed, Machine Learning Engineer job openings grew 344% between 2015 and 2018, with an average base salary of $146,085 according to its Best Jobs in the U.S. study.
It can be safely stated that Duolingo is developing very dynamically. There is also no doubt that the rapid growth of a startup also means the development of its employees.
Would you choose to join Pittsburgh’s unicorn if you had such a chance? What do you think about Duolingo’s contribution to the development of the education sector?
If you’re familiar with the sci-fi anthology series Black Mirror, you might think of the episode from the 4th season titled “Metalhead”. Apparently, it’s not fiction anymore – it’s today’s reality.
The robot dog named Spot is an invention that Boston Dynamics, a company that spun out of MIT, has been developing. According to the state branch of the nonprofit American Civil Liberties Union (ACLU), these robots are now working with the Massachusetts State Police’s bomb squad.
The ACLU accessed a memo of agreement document between the state and Boston Dynamics through a public records request.
The request letter the organization filed reads as follows: “The ACLU is interested in this subject and seeks to learn more about how your agency uses or has contemplated using robotics.”
The ACLU gathered all the available information about the new partnership, including the fact that Boston Dynamics leased the Spot robot dog to the police force for 90 days between August and November. Because no detailed information has been revealed to the public, we don’t know exactly how the machines are being used. The only information state police spokesman David Procopio provided about Spot is: “for the purpose of evaluating the robot’s capabilities in law enforcement applications, particularly remote inspection of potentially dangerous environments.”
Michael Perry, Boston Dynamics’ vice president of business development, stated that the company is aiming to make Spot useful in different areas, such as oil and gas companies, construction or entertainment.
Perry said he anticipates that the police are using Spot by sending it into areas that are too dangerous for human beings.
The abovementioned robot dogs are constructed for general-purpose use. They have an open application programming interface, which means a warehouse operator or, in this case, a police department can customize them with their own software. From what we can read on the internet, the State Police claims it hasn’t used that feature yet.
Even though Perry claims the robot won’t be used in a way that would harm or intimidate people, the ACLU, as well as the internet community, is worried about the situation. Currently, the major issue is the lack of transparency in the overall robotics program.
There are various conspiracy theories made by netizens. They mostly predict worst-case scenarios.
The question is whether this invention is safe for the human race. But let’s face the truth: anything can be dangerous if used in the wrong way. If the people working on these machines program an algorithm allowing them to shoot at people, they will follow the order.
Personally, I’m amazed, and I don’t really know which adjective to use other than “amazing” in this case. I applaud Boston Dynamics for the algorithms behind their breathtaking machines.
Every year, thousands of people around the world experience neurological conditions such as stroke and spinal cord injury, and many of them are left paralyzed. Such people are almost completely isolated from social life and from communication with doctors and relatives, and the use of expensive equipment is sometimes impossible to avoid. Technologies already exist for reading thoughts and turning them into text messages at up to eight words per minute, but recently scientists from the US state of Illinois managed to improve on this figure – with great help from artificial intelligence.
ScienceMag introduced this technology. In their article they described an experiment in which the new technology restores the ability to communicate to a patient with so-called tetraplegia. During the experiment, a patient with implanted electrodes imagined how he would move his hand if he were writing letters. This process produced brain activity, which the AI recorded. The computer then learned to associate each pattern of activity with a particular letter of the alphabet, and could display, one after another, the symbols the patient mentally traced on the screen.
According to the scientists, the AI is able to recognize symbols with 95% accuracy, making occasional mistakes only with similar letters such as “g” and “q”. Even so, a paralyzed person can now text at a speed of 66 words per minute; for comparison, a healthy person texts at about 120 words per minute.
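The decoding step – mapping a recorded activity pattern to the letter most likely being traced – can be illustrated with a nearest-template classifier. The 3-dimensional “activity vectors” below are invented for the sketch; real systems decode from hundreds of electrode channels with far more sophisticated models:

```python
import math

# Hypothetical activity templates learned for three imagined letters.
templates = {
    "g": [1.0, 0.2, 0.1],
    "q": [0.8, 0.3, 0.1],   # deliberately close to "g" - a likely confusion pair
    "o": [0.1, 0.1, 1.0],
}

def decode(activity):
    # Nearest template in Euclidean distance wins.
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(templates, key=lambda letter: dist(activity, templates[letter]))

print(decode([0.95, 0.2, 0.1]))  # close to the "g" template
```

Notice that an activity pattern midway between the “g” and “q” templates is where such a decoder errs, which mirrors the g/q confusions the researchers reported.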
By the way, thoughts can even be transformed into speech.
According to the editors of ScienceMag, researchers from Germany and the USA used computational models based on neural networks to reconstruct words and sentences by reading brain signals, as mentioned before. The system is the same; they simply observed areas of the brain at the moments when people read aloud, spoke, or listened to recordings.
One study relied on data obtained from 5 people with epilepsy. The network analyzed the behavior of the auditory cortex (which is active both during speech and during listening), and the computer then reconstructed the speech data from the impulses recorded from these people. As a result, the algorithm achieved an accuracy of 75%.
Another team relied on data from 6 people who had undergone the removal of a brain tumor. A microphone picked up their voices as they read different words out loud, while electrodes recorded information from the speech center of the brain. A computer then compared the electrode data with the audio recording. In this case, only 40% of the result was correct.
A third team, from the University of California, reconstructed entire sentences based on the brain activity of three patients with epilepsy who read specific sentences out loud. Some sentences were correctly identified in more than 80% of cases.
Despite these appealing results, the system has a lot of shortcomings and needs to be adjusted. However, it will be developed even further, so millions of people will once again have the opportunity to text and to speak.
What we see above is a bunch of ordinary, average people, right?
The catch is that none of them is real…
Everyone knows that AI has been implemented in diverse applications – not only autonomous cars, factory-management systems or the chatbots we see every day. Recently it has been applied to a very specific task, namely creating nonexistent people and their faces. It might sound like a useless whim of bored programmers, but it can actually be very useful for marketing agencies or graphic designers, as the images are royalty-free and available for everyone to use.
But how does that work?
Artificial intelligence has made it easier than ever to produce images that look completely real but are totally fake using Generative Adversarial Networks (GAN), a relatively new concept in Machine Learning, introduced for the first time in 2014.
The essential components of every GAN are two neural networks:
– A generator, which synthesizes new samples from scratch, starting from a random vector (noise), so its initial output is also noise.
– A discriminator, which takes samples from both the training data and the generator’s output and predicts whether they are genuine or counterfeit.
Over time, as it receives feedback from the discriminator, the generator learns to create more realistic images. The discriminator, in turn, also learns and improves by comparing synthesized photos with real images. In other words, one network generates a fake face, while another decides whether it is realistic enough by comparing it with photos of actual people. If the test isn’t passed, the face generator tries again. To see for yourself how well it works, go to https://thispersondoesnotexist.com – every time you refresh the page, you get a newly generated face.
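The generator/discriminator loop described above can be reduced to a few lines. Below is a deliberately tiny 1-D sketch – a linear generator, a logistic discriminator and hand-derived gradients – just to show the adversarial structure; it imitates a Gaussian, not faces:

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z, w, b):
    # Maps noise z to a "sample"; a 1-D linear map stands in for a deep network.
    return w * z + b

def discriminator(x, a, c):
    # Logistic score: the estimated probability that x is a real sample.
    return 1.0 / (1.0 + np.exp(-(a * x + c)))

# Adversarial training: real data is drawn from N(4.0, 0.5).
w, b, a, c, lr = 1.0, 0.0, 0.0, 0.0, 0.05
for _ in range(2000):
    real = rng.normal(4.0, 0.5, size=64)
    z = rng.normal(size=64)
    fake = generator(z, w, b)
    d_real, d_fake = discriminator(real, a, c), discriminator(fake, a, c)
    # Discriminator ascends log D(real) + log(1 - D(fake)).
    a += lr * np.mean((1 - d_real) * real - d_fake * fake)
    c += lr * np.mean((1 - d_real) - d_fake)
    # Generator ascends log D(fake) (the non-saturating objective).
    d_fake = discriminator(generator(z, w, b), a, c)
    w += lr * np.mean((1 - d_fake) * a * z)
    b += lr * np.mean((1 - d_fake) * a)

samples = generator(rng.normal(size=1000), w, b)
print(samples.mean())  # should drift toward the real mean of 4.0
```

StyleGAN-class face generators replace both linear maps with deep convolutional networks and add many training tricks, but the alternating two-player updates are exactly this loop.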
And if you get bored with fake faces, you can always admire some AI-generated cats at https://thiscatdoesnotexist.com – though in my opinion it sometimes gets quite creepy, as it’s not as advanced as the system mentioned above…
– Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio, “Generative Adversarial Nets”
– Tero Karras, Samuli Laine, Timo Aila (NVIDIA), “A Style-Based Generator Architecture for Generative Adversarial Networks”