The statement that facial recognition technology is controversial will probably come as no surprise. We are right to be concerned about the possibilities of this technology and its applications: in the wrong hands it can become a tool of mass surveillance and human rights violations, as in its particularly alarming use against the Uyghur minority in China.
Clearview AI, the company founded by Hoan Ton-That, also deserves a mention, since its application is controversial in the context of its possible uses. It used to be available to private customers. How does the software work? Basically, you can (at this point not you personally, because it is now available only to customers associated with law enforcement or some other federal, state, or local government department, office, or agency) upload a photo of a person, and it will return more public photos of that person along with links to where those photos appeared. Clearview AI has a database of more than 3 billion pictures scraped from millions of websites (Facebook, YouTube, Venmo, and others). The uploaded photo is compared against this database and voilà, the results appear. Simple, elegant, and with a high potential for being dangerous. Its search power far exceeds that of the facial recognition tools previously available to law enforcement.
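Clearview has not published its matching pipeline, but systems of this kind typically reduce each face to a numeric embedding vector and compare embeddings by similarity. The sketch below is purely illustrative and uses made-up random vectors in place of real face embeddings; the names, dimensions, and threshold are all assumptions, and a real system would compute embeddings with a deep face-recognition model:

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_face(probe, database, threshold=0.9):
    """Return (identity, score) for the best match, or (None, score) below threshold."""
    best_name, best_score = None, -1.0
    for name, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else (None, best_score)

# Toy "scraped" database: identity -> 128-D face embedding (hypothetical).
rng = np.random.default_rng(0)
database = {f"person_{i}": rng.normal(size=128) for i in range(1000)}

# A probe photo of person_42, slightly perturbed (different lighting, angle, ...).
probe = database["person_42"] + rng.normal(scale=0.05, size=128)

name, score = match_face(probe, database)
```

Even with the perturbation, the probe's embedding stays far closer to its own identity than to any of the other random vectors, which is the whole premise such systems rest on.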
It is used mainly by law enforcement departments and governments, especially in the USA, and in practice it has proved very effective. For example, the Indiana State Police solved a case within 20 minutes of first using the app. A fight between two men that ended in gunfire had been recorded by a bystander. The police uploaded a frame from the recording into the app and got a match very quickly: the gunman's face appeared in a video posted on social media, and his name was included in the caption. Since he was not in any government database, he would have been quite difficult to identify without the app. I would like to emphasize once again that the data is scraped from platforms including social media (which, yes, violates their terms of service). So any photo posted by anyone can end up in the database…
Clearview does not make its service publicly available, but another company might. If that happens, according to The New York Times, someone walking down the street could be identified on the spot by just about anyone, along with their home address and other details.
So how can we protect ourselves, now that so much of our lives exists online? Is there any way to outsmart these kinds of applications? Generated Media seems to have come up with an answer: a tool called Anonymizer. How does it work? We upload a real photo of our face, and the tool provides a variety of fake faces similar to ours. Similar, yet if a facial recognition company adds one of these fakes to its database, finding the real you through the fake image will be, according to the company, practically impossible. Generated Media also recommends changing the fakes as often as possible for stronger anonymity.
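Generated Media has not disclosed Anonymizer's internals, but the behaviour described — return the fake faces most similar to the uploaded one — amounts to a nearest-neighbour search in some embedding space. A minimal sketch under that assumption, with random vectors standing in for real embeddings (the database, ids, and distance metric are all hypothetical):

```python
import numpy as np

def top_k_similar(query, fake_db, k=20):
    """Return the ids of the k fake faces whose embeddings lie closest to the query."""
    ids = list(fake_db)
    matrix = np.stack([fake_db[i] for i in ids])    # shape (n, d)
    dists = np.linalg.norm(matrix - query, axis=1)  # Euclidean distance to each fake
    order = np.argsort(dists)[:k]                   # indices of the k smallest distances
    return [ids[i] for i in order]

rng = np.random.default_rng(1)
# Hypothetical database of GAN-generated faces: id -> 128-D embedding.
fake_db = {f"fake_{i}": rng.normal(size=128) for i in range(5000)}

query = rng.normal(size=128)  # embedding of the uploaded real photo
candidates = top_k_similar(query, fake_db, k=20)
```

The k=20 matches the "about 20 similar fake ones" the company describes; swapping Euclidean distance for cosine similarity would be an equally plausible design choice.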
What is the technology behind this solution? The company uses generative adversarial networks (GANs), first designed by Ian Goodfellow and his colleagues in 2014. A GAN pits two neural networks against each other in a zero-sum game. The generative network's goal is to create fake images good enough to deceive the discriminative network; the discriminative network's goal is to avoid being tricked, which means correctly guessing which images are artificial and which come from the original data. Put simply, the generator creates new data and the discriminator checks its authenticity. The game runs for many thousands of rounds; both networks evaluate their moves and use backpropagation to keep improving. The technology already works at scale: in less than a year, Generated Media created more than 2 million fake images, which the company uses as training data. Anonymizer works by analyzing the facial features in the uploaded photo and finding about 20 similar fake faces in the Generated Media database.
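The adversarial game can be made concrete with a deliberately tiny example: a one-parameter "generator" that shifts noise toward the real data, and a logistic-regression "discriminator" that tries to tell real from fake, each updated with hand-written gradients. This is a toy on 1-D numbers, not images, and every number in it (data mean, learning rate, round count) is an arbitrary choice for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

theta = 0.0       # generator parameter: a fake sample is theta + noise
w, b = 0.0, 0.0   # discriminator D(x) = sigmoid(w*x + b), its guess that x is real
lr = 0.05
REAL_MEAN = 4.0   # the "real" data distribution is N(4, 1)

for step in range(500):  # the game's rounds
    real = rng.normal(REAL_MEAN, 1.0, size=32)
    fake = theta + rng.normal(0.0, 1.0, size=32)

    # Discriminator move: gradient ascent on log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    b += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator move: gradient ascent on log D(fake), i.e. try to fool D.
    d_fake = sigmoid(w * fake + b)
    theta += lr * np.mean((1 - d_fake) * w)
```

As the rounds proceed, theta drifts from 0 toward the real mean, at which point the discriminator can no longer tell the two distributions apart. A real GAN replaces the single parameter with a deep network and the 1-D numbers with images, but the back-and-forth structure is the same.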
Of course, we cannot view this technology solely in the context of protecting ourselves against facial recognition systems. Like most technological solutions, it can also be a threat if it falls into the wrong hands; deepfakes are an obvious example. Interestingly, the underlying technology, GANs, has also aroused controversy within SAG-AFTRA (Screen Actors Guild – American Federation of Television and Radio Artists), which has even taken legal action. In the union's view, GANs threaten actors because production companies might decide to create GAN-enabled holograms instead of hiring real performers. Back to Anonymizer: since the intentions of a user cannot be known in advance, the tool is actually not so convenient for anyone with malicious aims. Why? Because such users could still be traced back to the company and thus identified.
To conclude, Anonymizer can be both fun and a potentially powerful tool: powerful in the sense of protecting ourselves not only from mass surveillance applications but also from being targeted for stalking. I would also like to stress the positive contexts for GANs. They can be used wherever we deal with a visual pattern, even in modelling dark matter in astronomy, and we should keep in mind that generating large training datasets was one of their original applications. To my mind, Anonymizer is an extremely interesting tool, and a convincing demonstration of the utility of synthetic media.
https://generated.photos/anonymizer
https://www.youtube.com/watch?v=73TsoVqnefI&feature=emb_title
https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html
https://en.wikipedia.org/wiki/Generative_adversarial_network
https://www.trendhunter.com/trends/anonymizer
https://medium.com/swlh/this-is-not-a-person-but-she-is-a-threat-6d6f2d4083f4

