Category Archives: AI

Instructions for hacking an electric car

Reading Time: 3 minutes

Our cars are getting smarter with every generation; unfortunately, so are the thieves who intend to steal them. Hackers have already found vulnerabilities in electric vehicles, and their attacks can have serious consequences. As electric vehicles become more sophisticated and more connected to the Internet, the risk of hacking and cyber attacks is only expected to increase.

One of the main problems with electric vehicles is that they are packed with sensors and controllers connected to the Internet, which makes them vulnerable to cyber attacks. Hackers could potentially gain remote access to these systems and manipulate them in ways that cause serious harm, such as disabling the brakes or interfering with the steering. In some cases, hackers could even take control of the entire car, putting the driver and passengers at risk.

Another problem is that hackers can attack charging stations for electric vehicles. These stations are also connected to the Internet and often communicate with the vehicles wirelessly, so hackers could potentially gain access to these systems and manipulate them as well.

Recently, a security expert discovered a technique that allows two attackers working together to unlock, start, and drive away a Tesla Model Y in a matter of seconds.

Josep Pi Rodriguez of the Seattle-based computer security firm IOActive found that attackers can abuse Tesla’s NFC (Near Field Communication) key technology, which is designed to let owners access their cars by tapping an NFC card against the reader on the door pillar. The catch is that the attack is not so simple: the thieves must work in pairs, and one of them has to get very close to the owner’s NFC card or smartphone. Rodriguez found that if one thief gets within close range of the driver after they leave the car, for example inside a store or a bar, while the accomplice stands by the vehicle, the pair can open the door and start the car.

  • Here’s how it works: the thief standing at the car uses a special device to make the car send a “challenge” to the driver’s NFC card, then relays that challenge over Wi-Fi or Bluetooth to a mobile phone held by the second thief, who is shadowing the driver. The second thief holds the phone near the driver’s pocket or bag where the NFC card is stored, and when the card responds, its reply is relayed back through the phone to the device at the car.

Tesla previously required drivers who unlock the car with an NFC card (rather than a key fob) to place the card between the front seats before they could shift into gear, but a recent software update removed this requirement. Tesla also offers an optional PIN-to-drive feature, which requires owners to enter a four-digit code before the car can be driven, yet only a fairly small share of owners enable it. And even if this extra protection stops thieves from driving off, the relay method described above still lets them open the doors and steal any valuables inside the cabin.

To address these problems, automakers and cybersecurity experts are working on stronger security measures for electric vehicles, including more advanced encryption, securely developed firmware and software, and regular updates and patches to close any vulnerabilities that are found.

Electric car owners can also take steps to protect their cars from cyber attacks: regularly updating software and firmware, using strong passwords and two-factor authentication, and avoiding public Wi-Fi networks when going online from their cars.

Thus, as electric vehicles become more popular and widely used, the risk of cyber attacks and hacking is expected to increase.

This is a serious problem with potentially serious consequences for electric vehicle owners, automakers, and the general public. It is important that automakers and cybersecurity experts work together to develop stronger security measures for electric vehicles, and that they educate owners about the steps they can take to protect their cars from cyber attacks.

Sources:

https://www.aljazeera.com/amp/economy/2022/1/12/teenager-says-he-remotely-hacked-into-more-than-25-teslas

https://www.indiatimes.com/amp/technology/news/tesla-cars-may-be-unlocked-and-hacked-using-bluetooth-devices-researchers-find-569861.html

https://www.entrepreneur.com/business-news/this-hacker-exposed-a-new-way-to-steal-a-model-y-tesla/435323


Google’s expensive mistake

Reading Time: 2 minutes

The IT world had barely finished discussing the new chatbot ChatGPT, already dubbed the “Google killer”, when a competitor launched its own AI-based bot, Bard.
Bard is built on Google’s existing large language model LaMDA, whose responses are said to be so human-like that it has been described as seeming sentient.

On the day of the new chatbot’s presentation, Google made a fatal mistake, one that cost the company about $100 billion.

On February 6, Google announced Bard, its AI chatbot and a competitor to OpenAI’s ChatGPT, saying it would become available in the coming weeks. However, in the promotional video the technology made a mistake, returning false information in response to a request.

Google published a GIF in which the chatbot answers the question: “What new discoveries from the James Webb Space Telescope can I tell my 9-year-old about?”. Bard offered three answers, including the claim that the telescope “took the first photographs of a planet outside the solar system.”

Astronomers reacted to the presentation immediately, pointing to NASA’s own website: the first picture of an exoplanet was actually taken back in 2004, which means the chatbot’s promotional video contained an error.

“I’m sure Bard will still amaze us, but for the record: JWST did not take ‘the first image of a planet outside our solar system’,” astrophysicist Grant Tremblay wrote on Twitter.

Bruce Macintosh, director of the University of California Observatories, also pointed out the error: “I speak as someone who photographed an exoplanet 14 years before the launch of JWST. Don’t you think a better example could be found?”

Tremblay also tweeted that the biggest problem with chatbots like ChatGPT and Bard is their tendency to confidently present incorrect information as fact: they often simply make up data, because in essence they are autocomplete systems.

Everyone knows there is already plenty of false information on the Internet, but the problem is compounded by Microsoft’s and Google’s desire to use these tools as search engines, where a chatbot’s answers take on the authority of an all-knowing machine.

The promotional video with the error was viewed 1.6 million times on Twitter. Almost immediately after its publication, Alphabet shares fell by 9%, and the company’s market value dropped by $100 billion.

Sources and references:

https://www.bbc.com/news/business-64576225.amp

https://www.theverge.com/2023/2/8/23590864/google-ai-chatbot-bard-mistake-error-exoplanet-demo

https://amp.cnn.com/cnn/2023/02/08/tech/google-ai-bard-demo-error/index.html


Fine-tuning GPT-3

Reading Time: 3 minutes

to improve performance in specific tasks and domains

GPT-3 is a powerful language-generation model, which makes it ideal for building chatbots and conversational interfaces, as well as other AI-driven applications such as automated content creation. We can also use GPT-3 to generate code and art, and even to compose music. In most cases the main skill is the ability to ask the right questions, while in more advanced projects, knowing how to use the GPT-3 API to build automated tools can greatly speed up our business or academic tasks.

This is one of the reasons why fine-tuning GPT-3 is so important. Fine-tuning for a specific task or domain involves training the model on a data set specific to the task or domain of interest. This process is also known as transfer learning: it adapts the model to new tasks by adjusting the weights of the pre-trained model to better fit the new data and emphasize a specific topic.

Take the example of fine-tuning GPT-3 for sentiment analysis. Here we would train the model on a data set of text labeled with sentiment: positive, negative, or neutral. This is useful, among other things, for a data analyst who wants to gauge the sentiment of tweets about, say, a statement by or the general character of a given politician. A sketch of what such labeled training data could look like follows.
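
To make this concrete, here is a minimal sketch of labeled training data for such a sentiment task, written in the prompt/completion format that OpenAI’s original fine-tuning endpoint expected. The file name, example tweets, and labels are all made up for illustration:

```python
import json

# Hypothetical labeled tweets; the " ->" suffix on prompts and the leading
# space in completions follow the conventions OpenAI documented for its
# original GPT-3 fine-tuning format.
examples = [
    {"prompt": "Tweet: The new policy is a disaster. ->", "completion": " negative"},
    {"prompt": "Tweet: Loved the speech today! ->", "completion": " positive"},
    {"prompt": "Tweet: The vote is scheduled for Tuesday. ->", "completion": " neutral"},
]

# Fine-tuning data is uploaded as JSONL: one JSON object per line.
with open("sentiment_train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```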

Fine-tuning GPT-3 can be approached in various ways, including with deep-learning libraries such as TensorFlow or PyTorch, by adjusting the parameters of a pre-trained model on new data. The process can take anywhere from a few hours to a few days, depending on the size of the dataset and the available computational resources; a minimal library-based sketch follows.
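
Since GPT-3’s weights are not publicly downloadable, a library-based fine-tune has to use an open model from the same family. The sketch below uses Hugging Face transformers with PyTorch to fine-tune GPT-2 as a stand-in; the corpus file name and hyperparameters are placeholder assumptions:

```python
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Placeholder domain corpus: one training document per line
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-finetuned",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    # mlm=False means plain causal language modeling, as GPT models use
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```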

During an ordinary chat with ChatGPT, you may notice that it remembers what you asked a few messages ago and can adjust its answers based on what it has already generated. The chat also learns from our conversations. Fine-tuning works in much the same way, but on a much larger scale. Moving into programming territory, we can fine-tune GPT-3 through the OpenAI API.

The fine-tuning process requires access to a dataset and a development environment in which to train the model, which the ChatGPT interface does not provide directly. So, to fine-tune GPT-3 you need to create an OpenAI API key and use it to access the GPT-3 models.

You can then use the API to fine-tune the model for your specific task or domain by uploading a dataset and starting a training job; a minimal sketch of this route is shown below. Alternatively, pre-trained models already tuned for specific domains or tasks are available from a number of providers such as https://huggingface.co/models, and you can use these without training on your own data set.
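
As a rough illustration of that workflow, here is a minimal sketch using the legacy pre-1.0 `openai` Python package, which exposed file-upload and fine-tune endpoints at the time of writing. The file and model names are assumptions, and a real project would also poll the job status and handle errors:

```python
import openai

openai.api_key = "sk-..."  # your OpenAI API key

# Upload the JSONL training file prepared earlier (see the sentiment example)
upload = openai.File.create(
    file=open("sentiment_train.jsonl", "rb"),
    purpose="fine-tune",
)

# Launch a fine-tuning job on top of a base GPT-3 model
job = openai.FineTune.create(
    training_file=upload.id,
    model="curie",  # one of the base models that supported fine-tuning
)
print("Fine-tune job started:", job.id)

# Once the job finishes, the resulting model is used like any other:
# openai.Completion.create(model="curie:ft-...", prompt="Tweet: Great news! ->")
```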

Another advanced technique is data augmentation, which is used to improve the performance of GPT-3 models by artificially increasing the size and variety of the training data. This can be done in various ways, such as adding noise to text, paraphrasing, combining existing examples into new ones, or, for image models, rotating and flipping images. Augmentation can make the model more robust and reduce overfitting; a toy text example follows.
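
For text, a toy version of this idea might look like the following sketch, which generates noisy variants of a training sentence by randomly dropping and swapping words. Real augmentation pipelines use more principled techniques such as back-translation or synonym replacement; everything here is purely illustrative:

```python
import random

def augment(text, p_drop=0.1, seed=None):
    """Return a noisy variant of a sentence: randomly drop words,
    then swap one adjacent pair. A crude text-augmentation baseline."""
    rng = random.Random(seed)
    words = text.split()
    # Drop each word with probability p_drop; fall back to the original
    # sentence if everything gets dropped
    kept = [w for w in words if rng.random() > p_drop] or words
    # Swap one randomly chosen adjacent pair of words
    if len(kept) > 2:
        i = rng.randrange(len(kept) - 1)
        kept[i], kept[i + 1] = kept[i + 1], kept[i]
    return " ".join(kept)

sentence = "The patient presented with acute chest pain and shortness of breath"
for s in range(3):
    print(augment(sentence, seed=s))
```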

For example, augmenting a medical dataset artificially increases its size and diversity and can help GPT-3 learn medical-specific language and terminology, while transfer learning lets the model adapt more efficiently to the new task or domain. I strongly encourage you to experiment with ChatGPT, as it can save us many hours of tedious work and improve the end result.

Source: OpenAI ChatGPT Master for Business and Software Applications

When life gives you data, make an analysis!

Reading Time: 2 minutes

… or a day in the life of a data analyst 

Data analysts, who are entrusted with converting raw data into insights that support decision-making, are the unsung heroes of the data industry. Their profession involves gathering, cleaning, and analyzing large amounts of data in order to find trends, patterns, and relationships that might otherwise go unnoticed. In a time when data is king, data analysts are essential in helping businesses make sense of the massive volumes of data they produce every day.

A data analyst’s day can be varied and difficult, involving everything from gathering and cleaning data to examining and visualizing it to developing and testing predictive models. In addition to having a solid grasp of statistics, data visualization, and machine learning, data analysts must be able to concisely and clearly convey their findings to stakeholders. In order to comprehend the business context of the data and guarantee that their research meets the objectives of the stakeholders, they must also be able to work collaboratively with cross-functional teams that include engineers, product managers, and business analysts.

One of the most important tools in the data analyst’s arsenal is the programming language Python, which has become the de facto standard for data analysis and data science. Python offers a wealth of libraries and tools that make it easy to perform data analysis tasks such as collecting data, cleaning it, exploring it, and building predictive models.

Here are some of the most common Python libraries used for data analysis (a short example combining a few of them follows the list):

  • Pandas: A fast, flexible, and powerful data analysis and manipulation library, used for tasks such as data cleaning, aggregation, and transformation.
  • NumPy: A library for numerical computing in Python, used for tasks such as linear algebra, random number generation, and array operations.
  • Matplotlib: A 2D plotting library for Python, used for tasks such as data visualization, histograms, and scatter plots.
  • Seaborn: A data visualization library based on Matplotlib, used for tasks such as regression plots, heatmaps, and violin plots.
  • Scikit-learn: An open-source library for machine learning in Python, providing a wide range of algorithms for classification, regression, clustering, and dimensionality reduction.
  • TensorFlow: A popular open-source platform for developing and training ML models, used for a wide range of tasks including image recognition, natural language processing, and time series forecasting.
  • PyTorch: An open-source ML framework for building and training deep learning models, used for tasks such as image classification, sequence analysis, and recommendation systems.
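
As a taste of how a few of these libraries fit together, here is a small self-contained sketch of a typical analyst workflow with pandas, NumPy, and Matplotlib: generate (or load) some data, transform it, summarize it, and plot it. The revenue figures are synthetic stand-ins for whatever raw data you would actually collect:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Synthetic monthly revenue -- in practice this would come from a CSV,
# database, or API (e.g. pd.read_csv("sales.csv"))
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "month": pd.date_range("2022-01-01", periods=12, freq="MS"),
    "revenue": rng.normal(100_000, 15_000, 12).round(2),
})

# Transformation: compare each month to a 3-month rolling average
df["rolling_avg"] = df["revenue"].rolling(3, min_periods=1).mean()
df["above_trend"] = df["revenue"] > df["rolling_avg"]

print(df.describe())  # quick exploratory summary statistics

# Visualization: revenue against its rolling average
plt.plot(df["month"], df["revenue"], label="revenue")
plt.plot(df["month"], df["rolling_avg"], label="3-month rolling avg")
plt.xlabel("month")
plt.ylabel("revenue")
plt.legend()
plt.tight_layout()
plt.show()
```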

To conclude, data analysts are essential to helping businesses understand the enormous amounts of data they produce every day. To transform data into insights and encourage reasoned decision-making, they combine technical abilities, like Python programming and machine learning, with soft skills, like cooperation and communication. The world of data is an exciting and gratifying place to be, and there are endless opportunities for growth and development whether you are an experienced data analyst or just getting started.

Sources:

https://pandas.pydata.org/docs/

https://numpy.org/doc/stable/

https://matplotlib.org/stable/contents.html

https://seaborn.pydata.org/

https://scikit-learn.org/stable/

https://www.tensorflow.org/

https://pytorch.org/

The Design and Engineering Behind Atlas: What Makes It So Special

Reading Time: 2 minutes

Robotics and Artificial Intelligence (AI) have been around for decades, but both industries have experienced a surge of growth in recent years. One of the most impressive robots created so far is Atlas from Boston Dynamics.

Atlas is a humanoid robot first unveiled by Boston Dynamics in 2013 and substantially redesigned in 2016. It stands 180 cm tall, weighs 80 kg, and is powered by hydraulics and electric motors. Atlas was built to navigate challenging terrain while carrying out tasks such as lifting objects or clearing obstacles. Its incredible agility and smoothness of motion set it apart from other robots, allowing it to jump over logs, balance on one leg, walk up stairs, and even do backflips.

Atlas has certainly earned its place as a leader among robots. Thanks to recent advances in programming, it can now use human-like hands with sensors to manipulate objects more precisely and dexterously than before. The Boston Dynamics robot can adapt to different object shapes and sizes, gripping and holding them securely while performing various tasks with great precision. This improved capability has greatly expanded the range of duties Atlas can undertake, making it an even more powerful tool for diverse applications.

[Video: Atlas demonstrating its manipulation abilities]

In addition to its physical capabilities, Atlas also boasts advanced AI algorithms that enable it to learn from its environment and make decisions based on the data it collects. This makes it possible for the robot to carry out complex tasks without direct human supervision. For example, researchers have used Atlas to complete challenging obstacle courses with minimal human assistance.

The rise of humanoid robots like Atlas brings incredible potential for the future. Such developments not only improve the overall capabilities of robots, but also help us better understand how to create machines that work safely alongside humans and provide efficient solutions to many problems. From responding to natural disasters and working on construction sites to assisting during complex surgical procedures, the development of humanoid robots could have far-reaching implications for our daily lives and our ability to solve long-standing challenges.

Atlas from Boston Dynamics stands as a remarkable example of human innovation, representing a huge leap forward by combining cutting-edge technology and advanced artificial intelligence algorithms. Its ability to move freely through complex environments and interact with objects as well as perform tasks shows it is an impressive feat of both engineering and programming. In our ever-changing world, Atlas has the potential to revolutionize many industries and drastically alter how we live and work in years to come due to its capabilities. It truly opens up vast possibilities for the future, broadening our ideas of what can be achieved.

ChatGPT’s new competitor

Reading Time: 3 minutes


Bing is Microsoft’s updated, AI-based search service. It is built on an OpenAI GPT language model newer than the one behind ChatGPT (GPT-3.5). Microsoft says it is not just an updated search engine but a new AI-powered way to search, with a chat interface that offers better searches, more complete answers, and more relevant results, so users can find what they are looking for faster. Artificial intelligence, the company argues, will revolutionize every category of software, including the largest category of all: search. Bing can also create content and inspire creativity. As Microsoft put it: “The new Bing can generate content to help you. It can help you write an email, create a 5-day itinerary for a dream vacation to Hawaii with links to book your travel and accommodations, prepare for a job interview, or create a quiz for trivia night. The new Bing also cites all its sources, so you’re able to see links to the web content it references.”

Microsoft has also announced changes to Edge, adding artificial intelligence to help people do more with search and the web. For the new Bing search and the new Edge browser, Microsoft highlights some key features:

  • Better search. The new Bing offers an improved version of familiar search, providing more relevant results for simple things like sports scores, stock prices, and weather, along with more complete answers when you need them, shown in a new sidebar.
  • Complete answers. Bing reviews results from across the web to find and summarize the answer you are looking for. For example, you can get step-by-step instructions for substituting another ingredient for eggs in a cake you are baking, without scrolling through multiple results.
  • A new chat experience. For more complex searches, such as planning a detailed travel itinerary or choosing which TV to buy, the new Bing offers interactive chat. The chat lets you refine your search until you get the complete answer you are looking for, by asking for more details, clarity, and ideas. Links are provided, so decisions can be acted on immediately.
  • A new Microsoft Edge experience. Microsoft has updated the Edge browser with new AI capabilities, a new look, and two new functions: chat and compose. In the Edge sidebar, you can ask for a summary of a long financial report to get the key takeaways, then use the chat function to ask for a comparison with a competitor’s financials and have it placed in a table automatically. You can also ask Edge to help compose content, such as a LinkedIn post, and then get help updating the tone, format, and length. Edge understands the web page you are viewing and adapts accordingly.

Google, however, has sounded the alarm internally, with even co-founders Larry Page and Sergey Brin stepping back in. On Monday, the company introduced its own alternative to ChatGPT called Bard. Google CEO Sundar Pichai called the software an “experimental conversational AI service” that is still being tested by a limited circle of users and company employees and will be released to the general public in the coming weeks.


Thus, Bing has been redesigned to make research easier and its results more reliable. Starting with the chat mode: you can ask literally any question through an interface very similar to ChatGPT, and the answer arrives in seconds.

Interestingly, when the chat searches the live web, answers are drawn directly from various topical sites. The source used to construct the answer is shown as a footnote, but the link takes the user to the site’s main page rather than to the page containing the cited text.

Sources and references:

https://blogs.microsoft.com/blog/2023/02/07/reinventing-search-with-a-new-ai-powered-microsoft-bing-and-edge-your-copilot-for-the-web/

https://habr.com/ru/news/t/715508/


Another novelty in the field of artificial intelligence

Reading Time: 2 minutes

Microsoft has introduced VALL-E, an artificial intelligence model that can simulate any human voice from a sample lasting only three seconds. The imitation is strikingly faithful, preserving both the timbre and the emotional coloring of the original.

Unlike other text-to-speech methods, which often synthesize speech by manipulating waveforms, Microsoft’s model analyzes how a person sounds, breaks that information into discrete “tokens”, and uses its training data to predict how that voice would sound pronouncing other phrases. VALL-E was trained on 60,000 hours of recorded speech from more than 7,000 speakers, roughly 100 times more than existing systems.

Most interestingly, VALL-E needs only a few seconds of audio to clone a voice, and it reproduces the emotional tone and even the background noise of the sample. The model is still quite new, yet it already has a clear advantage over its rivals, and further improvements are expected to make its speech even more human-like.

Google showed its own AI, Duplex, which can also speak almost indistinguishably from a human, back in 2018, but the point of Microsoft’s development is not the speech synthesis itself; it is the ability to imitate different voices from a tiny sample.

As with all other AI models, there is concern about the misuse of VALL-E, for example to imitate the voices of public figures, politicians, or celebrities. Criminals could also extract confidential data by making a person believe they are talking to family, friends, or officials. Some security systems also rely on voice identification.

However, VALL-E’s developers and researchers argue that such risks can be reduced with a separate model: one could build a classifier that determines whether a given audio clip was generated by VALL-E or not. How that would work during, say, a live phone call is not entirely clear, though.

So far this development is not open for public use, and we cannot try VALL-E ourselves. We cannot verify how well the model imitates intonation and shifts in emotional tone, or how quickly it works. But we can already listen to samples generated by VALL-E.

Sources:

https://www.business-standard.com/article/technology/microsoft-s-new-ai-tool-vall-e-can-replicate-any-voice-in-just-3-seconds-123011000696_1.html

https://valle-demo.github.io/

Robots from 2023

Reading Time: 2 minutes

Robots are no longer considered something extraordinary or amazing. Even so, progress in their development continues, and on quite a large scale. Previously, they were simple machines that were controlled by people and had a very narrow range of capabilities.

Fortunately or not, over the past 5-10 years the field of robotics has advanced a great deal, and Nvidia has made a significant contribution to that development. At this year’s International Consumer Electronics Show, all the top robots had Nvidia brains, and they were created for completely different areas of our lives.

One example is GluxKind’s “Ella”, a baby stroller with artificial intelligence. It can rock the baby on its own, help with descents and climbs, and uses numerous sensors to monitor its surroundings. These sensors allow the stroller to move independently: if there is no child in it, you do not have to push it, since it can follow you by itself while avoiding obstacles. And of course, the stroller can be paired with a phone and tracked via GPS.

A delivery robot from Neubility was also introduced. This unmanned machine joins the ranks of delivery robots, but it was built larger and more robust than many already on the market, since delivery robots are often hacked or misused. In theory, on-board cameras should record anyone who harms the robot, but it is unclear how long it will take before people leave it alone. The company is therefore considering more protected environments for the robot to work in, such as golf courses, hospitals, and resorts. In my opinion, it is genuinely difficult to bring such delivery robots into everyday life because of the many unpredictable external factors, but in hospitals and other fairly protected places such innovations can be very useful.

Another interesting development presented at the show was a control tower for autonomous parking.

Seoul Robotics has taken a different path toward the commercialization of autonomous vehicles. Instead of developing and embedding an entire autonomous driving system, sensors included, into the vehicle, the company turns to the surrounding infrastructure.

Seoul Robotics claims that its LV5 CTRL TWR provides information about the environment and selects the safest path for the vehicle. This approach can be much cheaper than building the technology into every car. It could also reduce the risk of catastrophic failures, help older cars interact with newer autonomous vehicles, and offer a viable low-cost upgrade that lets current non-autonomous cars behave as if they were autonomous.

Sources:

https://www.zdnet.com/article/best-robots-ai-ces-2023/


World’s first intelligent aquarium

Reading Time: 4 minutes

We all love to observe and learn new things about the organisms that inhabit our planet. We do this in a variety of ways, and when it comes to marine life, one of them is visiting an oceanarium. Usually there are signs near each tank with basic information about the fish living in it, yet in a public aquarium full of marine life it is often difficult to tell which species is which. A familiar scene: a child asks his parents about some fish, but they cannot clearly explain everything to him; sometimes adults themselves are simply too lazy to read the whole wall of text. The AI Aquarium, developed by specialists at Taiwan’s Industrial Technology Research Institute (ITRI), helps solve all these problems. By tracking observers’ eyes and the positions of the fish in the tank, and using an object-recognition algorithm, the artificial intelligence serves up the relevant information and makes watching marine life not only spectacular but also informative.

The new AI Aquarium is an almost ordinary aquarium with an interactive transparent screen on its outer face and two cameras. A 3D camera aimed at the audience tracks the movement of an observer’s eyes and determines with high accuracy where their gaze is directed. The second camera points inside the tank and covers all its inhabitants, making it possible to locate every fish at any moment: an object-recognition algorithm analyzes the video feed in real time and compares the animals’ appearance with pictures from a database. By comparing the direction of the user’s gaze with the location of each fish, the AI Aquarium determines which one the person is looking at (a simplified sketch of this matching step is shown below). The system then identifies the species and displays the relevant information on a transparent interactive microLED display; there is also an option to play the information in audio format. According to the institute, the system recognizes the species and location of fish with a very high accuracy of 98%.
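
To illustrate the core matching step, the sketch below shows one plausible (and heavily simplified) way to decide which fish a visitor is looking at, given a gaze point from the eye-tracking camera and bounding boxes from the fish-recognition camera. All names, coordinates, and data structures here are hypothetical, not ITRI’s actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Fish:
    species: str
    x: float  # bounding-box left edge, in screen coordinates
    y: float  # bounding-box top edge
    w: float  # width
    h: float  # height

def fish_under_gaze(gaze_x, gaze_y, detections):
    """Return the detected fish whose bounding box contains the gaze point."""
    for fish in detections:
        if fish.x <= gaze_x <= fish.x + fish.w and fish.y <= gaze_y <= fish.y + fish.h:
            return fish
    return None

# Hypothetical detections for one video frame
tank = [Fish("clownfish", 120, 80, 60, 30), Fish("blue tang", 300, 150, 90, 40)]

target = fish_under_gaze(140, 95, tank)
if target is not None:
    print(f"Show info card for: {target.species}")
```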

The species name appears on the screen as a caption placed directly below the animal in the viewer’s field of view. If viewers want more detailed information about a fish, they can request it with simple hand gestures; the system assigns different gestures, shown at the bottom of the screen, to each species. At the moment, the first AI Aquarium is on display at the National Museum of Marine Science and Technology in Taiwan. At CES 2023, the Industrial Technology Research Institute was named one of the leading innovators of the year, and the AI Aquarium was singled out as one of the most striking and creative achievements in the “virtual reality” category.

I believe such an AI Aquarium is a step toward a new kind of tourist experience and service. Perhaps in the near future, with the help of artificial intelligence, this area, like many others, will undergo global changes and reach a new level of service. I am sure this kind of aquarium will become a trend, as it creates a sense of immersion and enables a new kind of interaction between people and exhibits.

In my opinion, artificial intelligence and all the innovations it makes possible exist precisely to make human life better, to develop us, to open new horizons, and to expand the boundaries of our perception of the world. Artificial intelligence should serve people, and that is exactly what this invention does: the aquarium helps increase the public’s interest in studying the animal world, including marine life.

I am strongly convinced that the technologies used in this new smart aquarium can in the future be applied in other important areas of life, for example education or medicine.

In conclusion, it is worth noting that artificial intelligence is currently penetrating absolutely every sphere of human life. The AI Aquarium shows that even the most ordinary things can be looked at from a different angle and become an innovation, something out of the ordinary. The key is the ability to see the possibility and promise of combining simple things with artificial intelligence.

Links:

ITRI Named a CES 2023 Innovation Awards Honoree-Latest News-Media Center-Industrial Technology Research Institute

5 futuristic technologies from CES 2023 (interestingengineering.com)

AI Aquarium use eye tracking technology to be the world – Inavate (inavateonthenet.net)


Will AI replace artists, writers and musicians?

Reading Time: 3 minutes

In 2022, much attention was focused on neural networks. While some people see benefits and advantages in this technology, others are wary of, and sometimes hostile to, neural networks. At their core, neural networks are machine learning algorithms that “simulate” the workings of the human brain. Over the past year, neural networks have learned to write sentences and music, generate images, and even diagnose diseases. But will algorithms ever be able to replace human occupations such as writers, musicians, artists, and designers?

Neural networks have significant drawbacks. Throughout history, people have used emotions as a survival mechanism; fear, for example, helps us protect ourselves from threats. A person can recognize the emotions of others from body movements, tone of voice, the situation, and social signals; by communicating with others and absorbing cultural norms, we have learned to understand emotions. Unlike us humans, artificial intelligence cannot feel and has not lived for years among people, so emotional subtleties are very difficult for it to grasp. Nor does it create truly original content: for a neural network to produce text, pictures, or music, it must first ingest huge amounts of data from different sources. I am sure a human still has far more advantages, because we have been processing information for decades. Where there are clear mathematical rules (chess, for example), an algorithm can find its way, but in creative matters neural networks lack abstract thinking; they can serve people as assistants for creativity and as generators of new ideas.

Now, let’s talk about the advantages of neural networks. A neural network is very good at personalizing content, working with many variables at once, and creating variations. It can also process hundreds of data elements simultaneously, which saves time: once the AI recognizes a pattern, it can instantly generate several variations.

In general, both humans and neural networks currently have something to learn from each other. But still, will artificial intelligence be able to replace the creative professions?

First, it may affect those who call themselves artists and musicians but merely repeat the styles and creative approaches of others; those accustomed to repeating the same tasks every day, writing the same sentences and swapping out just a few words. All those who engage in plagiarism instead of creating something new will lose their jobs.

I am of the opinion that neural networks, at least for now, cannot come up with radically new ideas, but they can speed up the kind of work that is now routine. I think neural nets are a great tool, but being an artist or designer is not something that can be replaced by AI, on one condition: the artist must keep learning alongside modern technology. It is no secret that many people now look to neural networks for inspiration in their work. When designers experience “burnout”, neural networks toss up new ideas for development; you simply type your request as a text prompt. From my perspective, with all the opportunities AI gives a person, it should be perceived not as a threat but as an opportunity. After all, while neural networks can hand out ideas for inspiration, people can create something new and completely unique. What’s more, people can concentrate on something more important and think about how to change the world for the better with the help of neural nets. A state of constant complaining and fear is nowhere near development.

In conclusion, I would like to say that neural networks will not completely replace artists, designers, and musicians in the near future. Technology is evolving rapidly, but it is still limited in its ability to understand and replicate human creativity and emotion. Neural nets can help and improve performance, but they will never fully replace humans because they lack the unique human touch and perspective required for all manifestations of art.

Links:

https://medium.com/codex/future-of-graphic-design-artificial-intelligence-and-machine-a05332921014

https://futurism.com/a-new-ai-can-write-music-as-well-as-a-human-composer

https://hi-news.ru/eto-interesno/mozhet-li-nejroset-zamenit-xudozhnikov-pisatelej-i-programmistov.html?ysclid=ldso70yf2a847515369#i-3

https://petapixel.com/2023/02/05/will-ai-destroy-the-professional-headshot-industry/
