Emotion Recognition

It’s fascinating to see how quickly AI is progressing and how significantly it is improving our lives. There have been major breakthroughs in healthcare, travel, and other sectors. But what is most noticeable is the growing use of AI for surveillance.

Certainly, there are benefits to using AI for surveillance: an ambulance could locate someone in an emergency, or you could retrace your steps (say you got food poisoning). It could also be used to keep an eye on an individual who has been on a short leash with the law. The list could go on and on. But the main concern is how it will be used by certain governments and private groups.

Within a year, NtechLab (a facial recognition company) plans to deploy a fully operational “aggression detection” surveillance AI. Its purpose is to detect whether an individual is inclined to commit a crime.

While this product could be beneficial in one way or another, the main concern lies in where and how it will be used. Recently, the Russian Direct Investment Fund and an undisclosed Middle Eastern partner injected an additional 15 million dollars into the program. The involvement of these parties is highly troubling, considering their actions over the past couple of years and their stance on democracy. Multiple case studies have been conducted on similar products, and the results were worrisome.

I would like to say we should be worried, but it is too late for that. I guess all we can do now is hope that regulatory measures will be implemented as we move into the future.

Link: https://www.forbes.com/sites/thomasbrewster/2020/09/22/this-russian-facial-recognition-startup-plans-to-take-its-aggression-detection-tech-global-with-15-million-backing-from-sovereign-wealth-funds/#3b9684914b9e


7 thoughts on “Emotion Recognition”

  1. Morozov Mark says:

    Very useful technology. Even now, it could be used effectively during the quarantine: it could identify a sick person and track their movements. However, will it work if the person is wearing a mask?

  2. Karhol Oleksandr says:

    I consider this technology more useful than threatening. For instance, it might help companies automate the process of checking whether their employees’ behaviour and emotions are acceptable while talking to a customer.

    • Clefos Maxim says:

      Right, and I totally agree with you.
      Although the example you gave sounds a lot like micromanaging, and that is not necessarily an environment your employees would be happy to work in.
      But the issue at hand is not whether it is useful or not.

  3. Sekuła Maksymilian says:

    As you said, we should be worried. It’s a scary picture of the future: machines monitoring our emotions, absolute surveillance of our private lives, and robots deciding whether we are criminals. In the distant future (when there are far too many people on Earth; overpopulation is already a problem), maybe mechanical AI judges will work in courts, analysing whether we are guilty by scanning the look on our faces. Scary, because machines can always judge wrong (humans too, but not because of a software malfunction), or they can be hacked. Strange times are coming.

    • Clefos Maxim says:

      Very true. This emotion recognition software is, in a way, acting as a judge, except that it doesn’t hand down a sentence. Just to add to the post: another thing we can hope for is the integrity of whoever ends up using this type of AI.

  4. Rachańczyk Martyna says:

    Great post!
    Considering that the technology learns by analysing and comparing faces of convicts from databases (collected over the last few decades), we should keep in mind that it is likely to be racially biased.
    It’s already been proven that algorithms can be racist, though not necessarily for the same reasons as humans…
    Due to the harsh history of Black people, including slavery and a long fight for their freedom and rights, they have been more likely to be arrested for minor crimes and therefore added to police databases. As a result, the system more often labelled them as potential aggressors.
    Facial recognition technology is a breakthrough, but I also see a fine line between an inaccurate system and a system that targets particular groups of people merely for not being white.

    • Clefos Maxim says:

      Thank you! Much appreciated 🙂

      Great comment!

      Correct me if I’m wrong, but isn’t this technology being biased the reason why big tech companies in the US are not rushing into it? Unless there is something being kept from the public eye?
