First of all, AI is the future of our lives. Nowadays we can easily see how important this field is and what role it plays in the world economy.
This area is especially attractive to venture capitalists. In 2022 they ploughed $67 billion into firms that claim to specialise in AI, according to PitchBook, a data firm. Since mid-2021, the share of deals worldwide involving AI-related startups has increased by 17%. That is a big jump for such a short period. It is therefore not surprising that between January and October, 28 new unicorns (private startups valued at $1 billion or more) were minted.
There is huge competition between companies desperate to get their hands on AI talent. Derek Zanutto of CapitalG notes that large firms have spent years collecting data and investing in related technologies. Now they want to put this huge amount of data to use, and AI gives them different ways to do so.
Unsurprisingly, all the huge organisations use AI to improve their software. For example, Google uses AI to improve search results, finish your sentences in Gmail and work out ways to cut energy use in its data centres, among other things.
Big companies have quickly come up with plans to sell some of these AI capabilities to their clients. Revenues from machine-learning cloud services have doubled. In addition, upstart providers have proliferated, like Avidbots, which leverages data from a variety of robot sensors.
In October Microsoft launched a tool which automatically wrangles data for users following prompts. Other big companies may try to do something similar, and several startups already are. Google, for example, presented in a video its first foundation model, which uses prompts to crunch numbers in spreadsheets and perform searches on property websites.
Another amazing thing AI can do is artificial colouring. In 2021 Nike bought a firm which uses such algorithms to create new sneaker designs.
And the last example of how artificial intelligence is useful is the new generation of John Deere tractors, which have some AI capabilities. These tractors could help solve the world's food problem, which is hugely important.
However, it is hard to say that AI is all that profitable yet.
According to a McKinsey Global Institute survey, a quarter of respondents said that AI had benefited the bottom line (defined as a 5% boost to earnings). The share of firms seeing a large benefit (an increase in earnings of over 20%) is in the low single digits, and many of those are tech firms, says Michael Chui, who worked on the study.
To sum up, this sphere is developing every day and becoming more and more essential, but for now it does not do much for large organisations in terms of increasing profits.
When we search for "jobs of the future" on Google, many of the results are IT-related jobs and professions. This may seem obvious, because every young person has probably heard, at least once, some comment from parents, relatives, or teachers about how software engineers, developers, robotics engineers, programmers, and so on are going to become the most-needed occupations. I am certain most of us have also heard that robots will take our jobs away.
Analyzing the tech industry and its pace of advancement, we can see it clearly: IT-, tech-, and AI-related jobs are the future. Moreover, they are the present. As of 2022, 8.9 million people were working in tech in the US alone, and workforce growth rates are expected to double those of other industries.
So what exactly are the jobs of the future? There are hundreds of them, but I'll outline three of the most interesting ones, in my opinion.
1. Software developer/engineer
An obvious one, this is the first position in any article on the topic. The majority of the human population uses the Internet and electronic devices, so companies need people who can develop and improve their software. Moreover, these developers will need to keep getting better. This occupation will require, among other things, advanced machine-programming skills, because machine learning will eventually replace entry-level programmers.
2. Computer network architect
They design and build efficiently organized computer networks for companies, as well as create plans and layouts for data communication networks. These networks can be as small as a connection between two offices, or as big as a cloud infrastructure accessible to customers across the world.
3. Remote or onsite robot operators
This is a perfect example of a position created, for example, by AI. All around the world, there are robots working in warehouses that need 24/7 human supervision. While discussing AI, people most often fail to acknowledge the actual state of this technology. Yes, it is intelligent. Yes, it is able to do many tasks more efficiently than a person can. However, it also encounters countless problems along the way and has many bugs that still need fixing. While AI is improving, these intelligent robots need someone watching over them at the warehouse, or remotely, to ensure they are working safely and efficiently. Enter the so-called robot-watcher, who, in contrast to an engineer or a developer, does not need any programming skills.
So will AI or new technology take our jobs away? The debate on this topic is in no way settled. On one hand, many jobs are at risk of being automated in the next 20 years. Still, economists argue that automation will create new jobs. What is important to remember, but often overlooked, is that to get the whole picture and not feel threatened by these innovations, we have to acknowledge the opportunities they're offering us right now.
It is wonderful that artificial intelligence has already been incorporated into a number of sectors of the film industry, including scriptwriting, graphic design, casting, and even project promotion. Films are being created using AI algorithms and deep learning.
What role does AI play in movies?
It is becoming more typical in Hollywood to create scripts on computers. Algorithms based on machine learning and artificial intelligence can be used to write fresh scripts as well as character names and synopses for previously released films. To produce a new script, a machine learning algorithm would be fed a ton of data in the form of numerous movie scripts, or a book that needed to be adapted into a movie.
The movie business draws audiences to theaters through movie trailers. Studios must therefore create unique and entertaining trailers. The creation of these movie trailers can benefit from the use of AI by editors. For instance, a trailer for the movie Morgan was made using IBM Watson. A human editor can use the AI system to identify scenes with a lot of action or emotion and highlight them for the final trailer.
AI can also help editors when it comes to editing feature films. The AI computer can identify major characters thanks to its facial recognition technology, which helps human editors sort out such sequences.
Nowadays, promoting a movie is just as important as making it. Whether a movie is a success, or a failure will depend on the marketing and promotion tactics employed. AI can be employed to market films and make sure they are a financial success. An AI system can be used to analyze the viewership, the buzz surrounding the movie, and the actor’s success globally. This enables AI to schedule unique screenings and meet-ups with fans at particular venues to spark the audience’s interest in the movie even more.
Since the technology may be used to digitally connect performers to movies, the usage of AI for casting actors is not just confined to the pre-production phase. The system may be fed thousands of data points representing the facial features of performers expressing various moods. To preserve the actor’s natural expressions, the data can then be utilized to digitally overlay the actor’s face onto a body double.
The usage of this technology allows for the recording of scenes starring deceased performers. Using the technology, actors who have been in front of the camera for a long period can be made younger or digital characters can be created. This eliminates the need for multiple performers to portray the same role at various stages of life, and it preserves the character’s identity.
The introduction of artificial intelligence technologies in medicine is one of the main trends in the healthcare world. AI and neural networks can fundamentally change the entire world of medicine: transform the diagnostic system, promote the development of new drugs, improve the quality of medical services in general and reduce costs. In the future, the possibilities of AI are almost limitless.
Neural networks are actively used today in the development of intelligent systems because of their ability to learn. The mechanism of artificial neural networks mirrors the principle of biological ones. In digital form, a neural network is a graph with three or more layers of interconnected neurons.
In the learning process, input neurons receive data, the inner layers of the network process it, and the results are output. If the result obtained during training does not satisfy the researchers, they change the weights of the connections and retrain the network. The success of the process and the reliability of the results depend on the amount of input data: the more there is, the better.
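The loop just described (feed data forward, compare with the desired result, adjust the connection weights, repeat) can be sketched in a few lines. This is only a toy illustration; the 2-4-1 layer sizes, the XOR task and the learning rate are my own choices for the example, not anything from a real medical system.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs

W1 = rng.normal(size=(2, 4))   # connections: input layer -> inner layer
W2 = rng.normal(size=(4, 1))   # connections: inner layer -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass: input neurons receive data, the inner layer processes it.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # If the result does not satisfy us, adjust the connection weights.
    err = out - y
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out
    W1 -= 0.5 * X.T @ grad_h

print("mean absolute error:", float(np.mean(np.abs(err))))
```

The same retraining idea scales up: with more records (more rows in `X`) and more layers, the network captures more reliable patterns, which is exactly why the amount of input data matters.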
Neural networks can be applied in medicine in many ways. For example, a patient enters symptoms such as "headache", "high temperature" and "chills", and the neural network analyzes thousands or millions of other people's medical records and, based on their diagnoses, can suggest a disease for the person who made the request.
Naturally, the neural network cannot be 100% sure that the patient has, for example, the flu with the above symptoms, but it assumes such a diagnosis in accordance with the conclusions of doctors on other medical records. Today, many technologies for medicine have been developed based on neural networks, and some of them are already actively used in clinics around the world.
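A crude sketch of this record-matching idea follows. The records, symptom names and the overlap measure are all invented for illustration; a real system would learn from millions of records rather than compare a handful of symptom sets.

```python
from collections import Counter

# Each past record: (set of reported symptoms, the doctor's diagnosis).
records = [
    ({"headache", "high temperature", "chills"}, "flu"),
    ({"headache", "high temperature", "cough"}, "flu"),
    ({"sneezing", "runny nose"}, "common cold"),
    ({"headache", "nausea"}, "migraine"),
]

def suggest(symptoms, records, k=2):
    # Rank past records by how many symptoms they share with the query,
    # then let the k closest records vote on the likely diagnosis.
    ranked = sorted(records, key=lambda r: len(symptoms & r[0]), reverse=True)
    votes = Counter(diagnosis for _, diagnosis in ranked[:k])
    return votes.most_common(1)[0][0]

print(suggest({"headache", "high temperature", "chills"}, records))  # flu
```

As the article notes, such a suggestion is never 100% certain: it only reflects what doctors concluded in similar past cases.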
The development of AI today is a priority for many countries around the world. If we consider the introduction of smart systems in the medical field, then, first of all, their benefit will be to increase the accuracy of diagnosing various diseases.
The practice and experience of a doctor may not be enough to timely identify a particular problem in the human body, while a neural network with access to a huge amount of data, advanced scientific literature and millions of case histories will be able to quickly classify any case, correlate it with similar problems in other patients and suggest a treatment plan.
Artificial intelligence can take on all the tasks that distract medical staff from their main job – saving human health and life. Programs can select wards, search for available equipment, monitor the condition of medical equipment, etc.
AI today often shows higher accuracy in making diagnoses and performing other tasks than a doctor. If the doctor and AI work together, then the probability of errors is reduced almost to the level of statistical error.
Investments in AI in medicine are extremely important today – they provide an opportunity to develop the field, and in the future, completely change the entire face of healthcare in the world, make it more reliable, efficient, comfortable and safe for humans.
However, not everything is going smoothly at the moment. The introduction of artificial intelligence systems in the medical field has problems and disadvantages that should not be forgotten. There are several barriers to AI in medicine.
The creation and implementation of artificial intelligence systems requires serious funding. The high cost is largely due to the need to train the program, adjust it to the data accumulated in a particular medical institution. In addition, it requires special maintenance, which will require a qualified and motivated team. And last, but not least, for efficient and quick work of AI, serious computing power is needed, which may simply not be available in an ordinary medical institution.
Despite the serious difficulties of implementing AI systems, the prospects for their use encourage us to look for ways to overcome any obstacles. Highly qualified specialists from different parts of the world, including talented researchers, excellent mathematicians, doctors and representatives of pharmaceutical companies, are constantly working on the development of this area. However, despite the progress of AI, humans still play the leading role in healthcare.
As Christmas is fast approaching, we are starting to hear the holiday classics everywhere. From All I Want for Christmas at the Supermarket to Rockin’ Around the Christmas Tree on the radio – Christmas songs are virtually unavoidable.
But I’d like you to think of the artists behind these songs – most of Spotify’s Christmas Hits playlist consists of songs recorded or written before the first manned mission to the Moon.
Naturally, many of the authors and performers listed in the credit sections of these songs are long gone – Bing Crosby died in 1977, Nat King Cole passed away in 1965 and Frank Sinatra departed in 1998.
It’s a shame that we won’t be able to hear any new songs from them.
But what if it doesn’t have to be that way?
That’s where OpenAI’s Jukebox comes into play.
Unveiled in April 2020, the technology analysed over a million songs, along with their lyrics and metadata (release date, genre, mood), and is now capable of generating full tracks in the style of any well-known artist. The company shared a range of demos designed to resemble artists such as Alan Jackson, Katy Perry, or Elvis Presley. Most notably, though, the song that stands out is “Hot Tub Christmas”, in the style of Frank Sinatra. While the “recording” quality might not be perfect, the timbre of the “singer’s” voice is eerily similar to that of the legendary American singer.
Though the lyrics were co-written by a language model and OpenAI researchers, the chord progressions and instrumental cohesion are very well replicated in the computer-generated mp3s. The team behind Jukebox is aware of the software’s faults, as “[…] the generated songs show local musical coherence, follow traditional chord patterns and can even feature impressive solos, we do not hear familiar larger musical structures such as choruses that repeat.”
Jukebox doesn’t analyze the actual notes in the songs, but only relations between pitch over time. An upside of such an approach is the possibility of highly realistic human voice creation. For their future endeavors, OpenAI plans to integrate a note-to-MIDI technology which would detect the rhythms and exact notes, which would allow for a deeper, more natural, and precise song creation – perhaps with the use of software instruments or synthesizers for higher file and sound quality.
Jukebox, at this point, is treated by the music industry as a mere curiosity with no real applications – even despite a new feature for creating an a cappella file from user-generated lyrics being introduced. This dynamic might change relatively quickly if Jukebox becomes able to create classically written songs, providing the notes, rhythms and MIDI files behind them. With such possibilities, songwriters and producers could streamline their music creation processes and massively increase their output.
The current market situation is illustrated by the fact that most of the investment poured into creative AI comes from venture capital and tech corporations – not from the music industry.
At this point, it does not seem like any songwriter or producer jobs are endangered. High-quality audio files contain an enormous number of timesteps encoding the data – a standard 4-minute song in a 44.1 kHz .wav file contains over 10 million timesteps. Currently, a song needs to be almost fully produced and designed by a professional before being rendered into such a complex audio file.
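That figure is easy to verify, assuming one timestep per mono sample:

```python
# A 4-minute song sampled at 44.1 kHz: one timestep per mono sample.
sample_rate_hz = 44_100
duration_s = 4 * 60
timesteps = sample_rate_hz * duration_s
print(f"{timesteps:,}")  # 10,584,000
```

A stereo file doubles this, which is part of why raw-audio generation is so much harder than generating notes or MIDI.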
The process of AI art generation is slowly being integrated into commercial culture, with the generator Midjourney winning the Colorado State Fair Fine Arts Competition. Jukebox and similar technologies are often criticized for taking away the humanity out of art, while some perceive it as an opportunity to augment their creations through technology.
To me, it seems inevitable that Artificial Intelligence will be widely used in the music industry – major labels will push for anything that can give them a competitive edge in business.
We must also take into consideration the legal implications of Jukebox. Our laws don’t include AI “artists” and thus, there might be copyright implications. Who is the de facto author of such a song? The AI developer, or the person who entered prompts into the technology to create a specific tune? How do we split royalties for such songs? Furthermore, is it acceptable ethically to expand dead artists’ catalogues?
In conclusion: AI is slowly entering the creative arts, especially the music industry, expanding songwriters’ and producers’ output and possibilities. It appears that in this case, the risk of actual people being replaced by technology is lower than in easily automated, routine operations.
This time, I’ll ask the classic question: do you think that AI art is proper art? Should it be publicly disclosed that a song or a painting was generated through Artificial Intelligence?
The history of autonomous vehicles (AVs) is quite interesting. In the 2010s, companies like Mercedes, Ford and BMW tried to develop self-driving cars, although unsuccessfully. A year ago, the closest to accomplishing this idea was Tesla with its economical cars. However, regulations still require a human driver behind the wheel to supervise the car.
Nevertheless, in 2021 Raquel Urtasun started a company that could revolutionize the AV market. This company, called Waabi, is developing a self-driving truck, but in a slightly different way from the others. Instead of testing software in the real world, Raquel and her team have decided to test it inside another program: they test the AI in a simulation that is remarkably similar to real open roads. Quoting Raquel Urtasun: “Typically in the industry, you will build a prototype, collect data, make your software compatible and then you will discover issues and build the next generation a year and a half, or two years later. With Waabi our hardware design is done in simulation so you already built that next generation.” Because of this approach, Waabi could be the first company to release its product before the competitors. What is more, Waabi is working with truck drivers who know the industry and the market, which gives it a competitive advantage. With the different attitude presented by its CEO, this company can accomplish something amazing long before its competitors.
I am writing about this company because of its new approach, which is fascinating. We’re talking about a new generation of technology that is apparently going to speed up bringing this product to market. However, “speeding up” doesn’t mean very soon, due to the many hindrances ahead of Waabi. First, regulations: in this field of business there are a lot of them. Above all, people, and especially governments, need to be convinced of the safety of this technology, which will not be easy. Another problem is how much people will need to pay for the technology. If the price is too high, it will not be a worthwhile purchase for buyers. We can talk about the many advantages of such a solution, but money is the most important factor in business.
On the other hand, technology is getting more and more advanced and people’s mindset is changing all the time, so let me mention some positive aspects of this technology which can help Waabi overcome some obstacles. The most important of them is safety. Nowadays there are many drivers who are irresponsible on the road. The presence of a refined AI system in an AV could simply decrease the number of car accidents, which is a very serious problem in our world. Another positive thing that Waabi is trying to achieve is that people who are now truck drivers would not have to resign from their current work, and could retire peacefully.
In my opinion, we won’t see Waabi’s self-driving vehicles on the roads very soon, although it will eventually happen, and I believe in Raquel’s company. Waabi is getting closer, but this kind of product needs to be made as precisely as possible, so it must take some time to do it perfectly. How about you? What do you think about this kind of car on the roads? Will Waabi succeed?
Do you hate going outside? Are you an introvert who prefers to avoid human contact? Do you wish to stay at home but miss the experience of doing groceries? Worry not because Walmart is on it.
Walmart, Inc., was founded in 1962 and focused its early growth in rural areas, avoiding direct competition with retail giants such as Sears and Kmart. Walmart became one of the largest grocers in the United States within a decade of opening the combination of grocery and merchandise Supercenters. Emphasizing customer attention by implementing direct mail advertising, low-cost imports, and focusing on the efficiency of its distribution networks through regional warehousing allowed Walmart to become the largest retailer in the United States in 1990. Walmart’s revenues doubled by 1995 after the owner’s passing, and by 1999 the company had become the largest private employer in the world, as well as the largest corporation in the world.
The company’s development has reached its peak, and now, to stay at the top, it is trying to come up with ever more innovative and surprising customer-gaining solutions. While researching this topic, I came across the question: “Walmart is joining the metaverse. Are we ready?” Ready for what? Does Walmart’s marketing team think that people do groceries for fun? Is this really where the world is headed? It’s not progress; it’s regress. All this makes me wonder what the Walmart shopper would want to buy in a virtual world anyway.
According to William White, Walmart’s marketing chief, the company will use Roblox as a testing ground as it considers other moves in the metaverse. The experiences, according to him, were created with the next generation of consumers in mind, notably Gen Z, widely seen as those aged 25 and under. White stated that the business hopes to gain knowledge from the collaboration.
Currently, Walmart Land and Walmart’s Universe of Play are the two primary game experiences available on the Roblox platform. The store is also experimenting with new methods of customer contact, especially in light of how the pandemic has altered consumer purchasing patterns and increased their use of social media, apps, and gaming websites. Walmart is attempting to bridge the gap between the online and offline worlds. A music festival, “Electric Fest,” featuring performances by Madison Beer, Kane Brown, and Yungblud, will take place in October on Walmart Land; together with a ton of different games and a shop of virtual goods, or “verch,” that mirrors what customers can find in Walmart’s physical stores and on its website, it forms part of the retail giant’s first excursion into the virtual world. The big-box store has also held live shoppable events streamed on TikTok, Twitter, and YouTube. Additionally, it has launched a service on Pinterest that uses augmented reality to let customers envision how furniture or other decor might look in their homes.
For the time being, Walmart won’t profit from its immersive experiences. Instead, players may use tokens and other prizes from Roblox to purchase virtual goods. National brands, such as Skullcandy headphones and the toy company L.O.L. Surprise!, were integrated into the experiences based on demand from the younger gaming demographic of Roblox. However, White suggested that in the future, Walmart may profit from it by charging a brand for inclusion or by attempting to convert users’ virtual experiences into in-person or online shop visits.
Walmart Land has a virtual changing room featuring clothing from its exclusive fashion lines, such as Free Assembly, as well as an obstacle course made up of gigantic goods from the retailer’s Gen Z-focused cosmetic brands, such as Bubble skin care products and Uoma by Sharon C makeup.
During the Covid epidemic, Roblox attracted a lot of new users and made its stock market debut. The gaming platform reportedly saw an increase in daily active players from 32.6 million in 2020 to over 52 million. Although the firm claims it is drawing users of all ages, traditionally, it has attracted more young children and teenagers. Although Roblox has a market cap of roughly $21.2 billion, its shares have fallen by about 66% so far this year.
We constantly risk becoming lost in the future race, especially when the newest technological revolution is powering it. We can get caught up in designing these next-generation products and experiences, as we have all seen before, only to find that no one actually wants them. It appears that no one wants to be left behind – but are we really clear about where we are going with the metaverse?
One of the biggest artificial intelligence trends we’re seeing is the increased use of AI technology for cybersecurity and surveillance.
Many believe that the introduction of artificial intelligence into cybersecurity technology will amount to a revolution, and that it will happen much sooner than one might think. In reality, we are more likely to see only gradual improvements in this area. But even these steps toward autonomy go far beyond anything we were capable of in the past.
When looking for new ways to apply machine learning and artificial intelligence in the field of cybersecurity, it is important to outline the range of modern problems in this area. AI technologies can be useful for improving many processes and aspects that we have long taken for granted.
A significant part of cybersecurity weaknesses is related to the human factor. For example, even with a large IT team, properly configuring a system can be an incredibly difficult task. Computer security is constantly improving, and today this area has become more complex than ever. Adaptive tools can help troubleshoot issues that arise when replacing, modifying, and upgrading network systems.
Manual labor efficiency is another cybersecurity issue. A manual process cannot be replicated exactly the same every time, especially in a dynamic environment such as today’s cybersecurity landscape. Customizing multiple corporate endpoints is one of the most time-consuming tasks. After initially provisioning a device, IT pros often have to go back to the device to fix configurations or update settings that can’t be changed remotely.
It also should not be forgotten that the nature of threats is constantly changing. If people are responsible for responding to them, their speed of action can be slowed down when faced with unexpected problems. A system based on AI and machine learning technologies can work under the same conditions with minimal delay.
Threat response time is one of the most important performance indicators of a cybersecurity service. Attacks are known to move very quickly from exploitation to deployment. In the past, before launching an attack, attackers had to manually check all vulnerabilities and disable security systems and sometimes this process could take weeks.
A person’s reaction may not be fast enough, even if the type of attack is well known. This is why many security teams are more focused on remediating successful attacks than preventing them. Undetected attacks represent a separate danger.
Machine learning technologies are able to extract attack data, group it and prepare it for analysis. They can provide reports to cybersecurity professionals to facilitate data processing and decision making. In addition to reports, this type of security system can also offer recommended actions to limit further damage and prevent further attacks.
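As a toy illustration of extracting and grouping attack data, here is a minimal sketch that flags an outlying traffic source using a robust distance score. The feature values and the threshold are invented for the example; a production system would use far richer features and proper clustering or classification models.

```python
import numpy as np

# Per-source features: [requests per minute, distinct failed logins].
# Invented numbers: three ordinary sources and one brute-force-like burst.
events = np.array([
    [120, 2],
    [118, 1],
    [125, 3],
    [900, 40],
])

# Median-based center and spread are robust to the outlier itself.
center = np.median(events, axis=0)
spread = np.median(np.abs(events - center), axis=0) + 1e-9
score = np.max(np.abs(events - center) / spread, axis=1)

flagged = np.where(score > 10)[0]  # crude threshold, chosen for the example
print(flagged)  # [3]
```

The flagged indices could then feed a report for analysts, alongside a recommended action such as rate-limiting the offending source.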
Ideally, the role of AI in cybersecurity comes down to interpreting patterns discovered by machine learning algorithms. Of course, modern AI is not yet able to interpret the results as well as a human. This area is actively developing, a search is underway for algorithms similar to human thinking. But the creation of real AI is still far away. Machines have yet to learn how to rethink situations in abstract terms. Their creativity and ability to think critically is far from the popular image of ideal AI.
Faces are as distinctive as fingerprints and can disclose a lot about our personalities, age, health, and emotions. Numerous face features change as a person grows, as a result of, among other things, various settings, activities, and nutrition. The scientific community has made numerous successful attempts to imitate and comprehend how various people’s face features change with age. The definition of facial progression technology, also known as age progression technology, is the artistic re-rendering of a face with effects of natural aging or rejuvenation at any future age for a specific face. This technology is applicable in a variety of situations, including cross-age face recognition, age estimates, and entertainment. The Face App application, which allowed users to imitate their faces throughout different ages, with the most popular one being the advanced age filter, is one very well-known example that the majority of social media users in the world could be familiar with.
Aside from amusement, face progression software can be utilized in essential applications such as detecting missing children. Finding a missing child after an extended length of time, such as eight or ten years, might be difficult due to considerable changes in face features. Face progression technologies, for example, can assist parents or officials in estimating the change in facial features of a missing child, making it easier for them to be identified.
One organization in Africa, in Kenya to be specific, has used AI to its full potential. Missing Child Kenya is enhancing its technique for identifying missing children by utilizing age-progression technology, specifically facial progression. Missing Child Kenya, established in July 2016, is a non-profit community-led effort that uses technology and crowdsourcing to search for, trace, and reunite missing, displaced, lost, and found children. According to Maryana Munyendo, the organization’s founder and executive director, Missing Child Kenya has found and reunited 496 children with their families, committed 73 children to government homes for safe care and custody, documented 21 children as deceased, and is still looking for another 190, totaling 780 children in its case files.
Missing Child Kenya struck a historic agreement with the Italian Missing Children Institute to provide support for forensic imaging, photographic manipulation techniques, facial reconstruction techniques, adult age progression, and photo repair of its database of missing Kenyan children. Applying this method involves updating the photographs of the missing children every two years and, for those above the age of 18, every five years. In this way, it is feasible to disseminate pictures that are age-appropriate for the missing child and so improve the effectiveness of the search. To start the project, the group has so far collaborated with four families. Anita Njeri Nyambura went missing in 2016, and hers was the first case that Missing Child Kenya dealt with. They collaborated with the family to create photographs of the missing child by imagining how she might have changed over time.
In the 21st century, artificial intelligence can do everything. Painting pictures, driving cars, helping doctors in medicine, and what about music? Does AI know how to compose music and write lyrics for songs?
In truth, artificial intelligence can do this too. Not with a soul or the deep meaning a human brings, because a robot has no feelings, but it still knows how, and it even writes lyrics for music.
How exactly do neural networks create music? The general principle is that the neural network “looks” at a huge number of examples and learns to generate something similar. But it is impossible to set the network the task of writing beautiful music, because beauty cannot be captured in a formula; it is a non-mathematical requirement. What the network can do is reproduce something like what already exists. One approach by which such music is created is the auto-encoder. It works like this: we compress the music at the input into a very compact representation and then expand it back to its original form. The compact representation cannot hold everything that was in the music, so the neural network is forced to capture common properties of music in its learned parameters. Then, to generate music, we take a random sequence of numbers, apply the rules about music that the network has learned, and get a piece of music that sounds human-made.
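The compress-then-expand loop can be sketched with a tiny linear auto-encoder. Everything here is a toy assumption of mine (8-dimensional random vectors standing in for audio fragments, a 2-dimensional bottleneck, plain gradient descent); real music models are vastly larger and non-linear.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))          # stand-in for fragments of "music"

W_enc = rng.normal(size=(8, 2)) * 0.1  # encoder: 8 -> 2 (the bottleneck)
W_dec = rng.normal(size=(2, 8)) * 0.1  # decoder: 2 -> 8 (expand back)

lr = 0.01
for _ in range(2000):
    z = X @ W_enc                      # very compact representation
    X_hat = z @ W_dec                  # expanded back to original form
    err = X_hat - X                    # what the bottleneck failed to keep
    g_dec = z.T @ err / len(X)
    g_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

# "Generation": decode a random point from the compact space.
sample = rng.normal(size=(1, 2)) @ W_dec
print(sample.shape)
```

Because the bottleneck cannot store each example exactly, the weights are forced to encode shared structure, which is precisely what gets reused when decoding a random point.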
The Turing music test. How can we tell that a piece of music created by a machine is really worthy of our attention? To test the work of artificial intelligence systems, the Turing test was invented. Its idea is that a person interacts with a computer program and with another person: we ask questions of both and try to determine which is which. The program is considered to have passed the test if we cannot distinguish it from a person. For example, the DeepBach algorithm, which generates notes in the style of Bach, was tested this way. More than 1,200 people (both experts and ordinary listeners) were surveyed and asked to distinguish the real Bach from the artificial one. It turned out to be very difficult: people can hardly distinguish between music composed by Bach and music created by DeepBach.
What about the lyrics? Well, we’ve sorted out the musical compositions, but what about the lyrics for the songs? Can artificial intelligence compose poetry? Yes, and this task is even easier than writing melodies, although there are also enough difficulties here — the algorithm needs not only to “come up” with a meaningful text, but also to take into account its rhythmic structure. In 2016, the developers of Yandex released the album “Neural Defense”. It includes 13 songs, the lyrics for which were composed by artificial intelligence. A year later, the album “Neurona” was released with four songs in the style of Nirvana, the verses for which were also generated by neural networks.
Thus, we see that artificial intelligence is able to write even music and lyrics, but will it ever replace songs written by people, into which feelings and life experiences are poured?