Artificial Intelligence detects people’s emotions.

Reading Time: 3 minutes

By 2023, one of the most popular uses of machine learning will be emotional AI: technology that can recognise and respond to human emotions. For example, former Google researcher Alan Cowen launched Hume AI, which is building tools to detect emotion in vocal, facial, and linguistic expressions. Another company working with emotional AI is Smart Eye, which recently acquired Affectiva, the creator of the SoundNet neural network, an algorithm that can classify emotions such as anger from audio samples in less than 1.2 seconds. Even the video platform Zoom is introducing Zoom IQ, a feature that will soon give customers real-time measurement of emotions and engagement during virtual meetings.
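To make the idea concrete, here is a minimal Python sketch of audio emotion classification in the spirit of SoundNet. It is not Affectiva's actual model: the label set, the MFCC features, the random-forest classifier, and the stand-in training data are all illustrative assumptions.

```python
# A minimal sketch of audio emotion classification, in the spirit of
# systems like SoundNet. NOT Affectiva's real model; labels, features,
# and classifier are illustrative assumptions.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

EMOTIONS = ["neutral", "anger", "joy"]  # assumed label set

def extract_features(waveform: np.ndarray, sample_rate: int) -> np.ndarray:
    """Summarise a clip as the mean of its MFCCs -- a simple, common baseline."""
    mfcc = librosa.feature.mfcc(y=waveform, sr=sample_rate, n_mfcc=13)
    return mfcc.mean(axis=1)

# Stand-in training data: random waveforms with random labels, just to keep
# the sketch self-contained. A real system would use labelled speech clips.
rng = np.random.default_rng(0)
sr = 16_000
X = np.stack([extract_features(rng.standard_normal(sr), sr) for _ in range(30)])
y = rng.integers(0, len(EMOTIONS), size=30)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Classifying a new 1-second clip -- the step a production system would
# run in well under 1.2 seconds.
clip = rng.standard_normal(sr)
print(EMOTIONS[clf.predict([extract_features(clip, sr)])[0]])
```

The design is deliberately simple: compress a clip into a fixed-length feature vector, then apply an ordinary classifier. Production systems replace both steps with learned neural representations.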

Tech businesses will introduce sophisticated chatbots in 2023 that can closely mimic human emotions to build stronger relationships with customers in the banking, education, and healthcare industries. Microsoft’s chatbot Xiaoice is already successful in China, with users reportedly conversing with “her” more than 60 times a month. It has even passed a version of the Turing test, with users failing to realise for ten minutes that they were not talking to a human. According to an analysis by the research consultancy Juniper Research, there will be 2.8 billion chatbot interactions annually by 2023.
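The core mechanic of an emotion-aware chatbot can be shown in a few lines: score the user's message for mood, then condition the reply on it. This toy sketch is nothing like Xiaoice's large neural models; the word lists and reply templates below are invented purely for illustration.

```python
# A toy sketch of the emotion-aware chatbot idea: detect the user's mood,
# then pick a reply that matches it. Real systems such as Xiaoice use large
# neural models; the lexicon and templates here are made up.
NEGATIVE = {"sad", "angry", "tired", "lonely", "worried", "upset"}
POSITIVE = {"happy", "great", "excited", "glad", "good", "love"}

def detect_mood(message: str) -> str:
    words = set(message.lower().split())
    if words & NEGATIVE:
        return "negative"
    if words & POSITIVE:
        return "positive"
    return "neutral"

REPLIES = {
    "negative": "I'm sorry to hear that. Do you want to talk about it?",
    "positive": "That's wonderful! Tell me more.",
    "neutral": "I see. What happened next?",
}

print(REPLIES[detect_mood("I feel lonely and tired today")])
# -> I'm sorry to hear that. Do you want to talk about it?
```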

Emotional AI will also be widely used in schools by 2023. Some secondary schools in Hong Kong already use an artificial intelligence application made by Find Solution AI that analyses tiny facial muscle movements to identify positive and negative emotions. The system lets teachers monitor students’ emotional changes, motivation, and concentration, so they can intervene early if a student starts to lose interest.
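The general pipeline behind such a classroom system can be sketched as: detect a face in a video frame, crop it, and hand the crop to an expression classifier. Find Solution AI's model is proprietary, so the classifier below is a deliberate stub; only the OpenCV face-detection step is real.

```python
# A rough sketch of the classroom pipeline: find faces in a frame, crop
# them, and pass each crop to an emotion classifier. The classifier is a
# stub -- the real model is proprietary and far more sophisticated.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def classify_emotion(face_crop) -> str:
    """Placeholder for a trained facial-expression model."""
    return "engaged"  # a real model would return e.g. happy/neutral/bored

def monitor_frame(frame) -> list[str]:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [classify_emotion(frame[y:y + h, x:x + w]) for (x, y, w, h) in faces]

# Usage: read one frame from a webcam and report each detected face's state.
capture = cv2.VideoCapture(0)
ok, frame = capture.read()
if ok:
    print(monitor_frame(frame))
capture.release()
```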

On the other hand, emotional AI is far from perfect. It relies on algorithms that ignore the social and cultural context of the person and the situation. An algorithm can detect and report crying, for example, but it cannot always determine the cause or meaning of the tears. Similarly, a scowling face does not always indicate anger, yet that is the conclusion an algorithm is likely to reach. We all adapt to our society, so our expressions do not always accurately reflect how we feel inside. Furthermore, emotional AI is likely to exacerbate gender and racial inequality. A 2019 UNESCO report, for example, showed the harmful effects of the gendering of AI technology, with ‘female’ voice-assistant systems designed around ideals of emotional passivity and servility. Racial inequality is a concern too: an analysis of 400 NBA games using two well-known emotion recognition systems, Face++ and Microsoft’s Face API, found that Black players tended to be assigned more negative emotions, even when they were smiling.
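The kind of audit behind that finding can be sketched simply: collect the emotion scores an API assigns to matched photos from two groups and compare the group means. The scores below are made up purely to show the method; the real study queried Face++ and Microsoft's Face API on actual photographs.

```python
# A minimal sketch of a bias audit: compare the average "anger" score an
# emotion API assigns to two groups on matched photos (e.g. smiling
# players). The scores are fabricated for illustration only.
from statistics import mean

scores = [
    {"group": "Black", "anger": 0.31},
    {"group": "Black", "anger": 0.27},
    {"group": "white", "anger": 0.12},
    {"group": "white", "anger": 0.09},
]

by_group: dict[str, list[float]] = {}
for record in scores:
    by_group.setdefault(record["group"], []).append(record["anger"])

for group, values in by_group.items():
    print(f"{group}: mean anger score {mean(values):.2f}")
# A consistent gap on otherwise similar photos is evidence of bias.
```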

In my opinion, emotional AI can benefit us. It could free up the time of medical staff by talking to patients and giving them basic support, and it could help students be more active in class by alerting teachers to anyone who is not engaged. On the other hand, emotional AI has clear disadvantages. It is biased against certain groups and can therefore discriminate against them, and some people simply do not want to be analysed. Moreover, AI cannot accurately anticipate emotions, nor can it feel empathy, so it cannot replace professionals such as psychologists.

Thanks for your time. Feel free to comment below 🙂

References:

https://www.wired.co.uk/article/empathy-artificial-intelligence
