
According to a study by the Polish researcher Dr. Michał Kosiński, artificial intelligence can infer even sexual orientation or political views from facial features alone. The algorithm he developed can read this information from something as ordinary as a profile photo. It was first trained to recognize photos of people who self-identified as liberals or conservatives, and then made its own predictions. Across more than a million photos, the AI correctly identified political views in up to 72 percent of cases – better than human judges (55 percent) and better than random guessing (50 percent). It even outperformed a 100-question survey about political views, whose effectiveness was 63 percent.
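To make that setup concrete, here is a minimal sketch in Python of the kind of pipeline the study describes: a simple linear classifier trained on numeric face descriptors paired with self-reported labels, scored against the roughly 50 percent accuracy of a coin flip. The file name and the load_face_embeddings helper are hypothetical placeholders for illustration, not part of Kosiński's actual code.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def load_face_embeddings(path):
    # Hypothetical loader: X is an (n_samples, n_features) array of face
    # descriptors, y a binary label (0 = conservative, 1 = liberal).
    data = np.load(path)
    return data["X"], data["y"]

X, y = load_face_embeddings("profile_photos.npz")

# Train a linear classifier on the embeddings and estimate its accuracy with
# cross-validation, so the score reflects performance on people the model has
# never seen; chance level for this two-way choice is about 50 percent.
model = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(model, X, y, cv=10, scoring="accuracy").mean()
print(f"Cross-validated accuracy: {accuracy:.1%}")

The point of the cross-validation step is that the reported 72 percent refers to people the algorithm was not trained on, which is what makes the comparison with human judges and with a questionnaire meaningful.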
Examining political preferences based on appearance is not Dr. Kosiński's first study of this kind. In 2017, he and his university colleague Dr. Yilun Wang presented research showing that an algorithm can determine a person's sexual orientation with 91 percent accuracy for men and 83 percent for women based on just five photos. The report on the study's findings caused considerable controversy, with some scientists dismissing it as "nonsense". Meanwhile, algorithms that analyze human faces keep getting better. On the one hand, they are extremely helpful in searching for missing persons and detecting the perpetrators of crimes; on the other, they open a huge field for abuse and manipulation by those in political power. The Stanford University researcher's study was conducted not on city surveillance footage but on social media: the algorithm drew on a collection of more than one million photographs of citizens of the United States, Canada and the United Kingdom obtained from Facebook, Instagram and other sites. Its task was to assign each photo to one of two political groups, liberals or conservatives, while setting aside characteristics such as gender, skin color, or age.
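One way to read the claim that gender, skin color and age were set aside is that the accuracy has to hold up even when those attributes are fixed. A rough sketch of such a check, assuming a pandas DataFrame with hypothetical column names (none of these identifiers come from the original study), could compare cross-validated accuracy within groups that share the same demographics:

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def within_group_accuracy(df: pd.DataFrame, embedding_cols, label_col="orientation",
                          group_cols=("age_band", "gender", "ethnicity")):
    # Score the classifier separately inside each demographic group, so that
    # age, gender and ethnicity cannot explain whatever accuracy remains.
    scores = []
    for _, group in df.groupby(list(group_cols)):
        counts = group[label_col].value_counts()
        if len(counts) < 2 or counts.min() < 25:
            continue  # skip groups too small or one-sided to evaluate fairly
        X = group[embedding_cols].to_numpy()
        y = group[label_col].to_numpy()
        scores.append(cross_val_score(LogisticRegression(max_iter=1000),
                                      X, y, cv=5).mean())
    return float(np.mean(scores))  # mean accuracy with demographics held fixed

If the average within-group accuracy stays well above 50 percent, the predictions are not simply a proxy for age, gender or skin color.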
However, it is unclear how the algorithm would behave if asked to determine the political views of random people picked from a crowd. Besides, most people's political preferences are more complex than a simple liberal–conservative divide, and it is unclear how the AI would handle a more detailed analysis. It is likely that over time the algorithm will be refined and will be able to read even more from our faces and appearance. This, Dr. Kosiński admits, can be dangerous if used improperly.

Facial recognition systems deployed in China and the US are already able to monitor citizens effectively. In the United States, social activists work to limit government interference in citizens' privacy and call for monitoring to be kept to a minimum. In China, by contrast, the authorities use city monitoring to check whether citizens sort their rubbish properly, while facial recognition lets residents shop without cash and identify themselves at a bank – and it recently emerged that Beijing uses the same system to surveil the Uighur ethnic minority. During rallies and protests, participants' faces are recorded, and the data is stored and used when needed. Not surprisingly, the most radical activists are calling for a ban on this technology and the dismantling of urban monitoring systems. Dr. Kosiński, however, emphasizes that the problem is not the algorithm or the system, but the people and how they use them.

Dr. Kosiński, coordinator of the myPersonality project, is one of the creators of an algorithm that builds an accurate personality profile of a social media user from their online activity. This algorithm was used by Cambridge Analytica, which obtained the data of nearly 90 million people without their consent and turned it into a tool for influencing voter preferences, and thus election outcomes; the company's work is often credited with contributing to Donald Trump's victory in the 2016 election. Kosiński maintains that he is not to blame for the Cambridge Analytica scandal. "It's not my fault. I didn't build the bomb. I only showed that it exists," he said in an interview with the Swiss magazine Das Magazin. In an interview with the daily Neue Zürcher Zeitung, he argues that Facebook is "fantastic for democracy," but that we must accept it is time to say goodbye to privacy and anonymity, which are in fact unique in human history. For millennia people lived in small groups where everyone knew everything about everyone else; only with the rise of cities and large agglomerations did people become anonymous and gain what we today call privacy. According to Kosiński, people will not escape technology; they must learn to live with it so that it harms them as little as possible. "There is no way back, we will not become nomads and gatherers again," the scientist argues. With the loss of anonymity we have gained access to new services and products based on digital technologies. And this is the future of humanity.