Technological threats

SHARE A FACEBOOK LIKE, AND WE WILL HAVE ACCESS TO YOUR LIFE!

We are aware of new technologies. We know why AI is being created, and we all know the fields where it is already used or could be pushed even further. The main question is: how does it actually work?

The unspoken fact is that developers often do not fully understand what they are developing. We do not follow how a system learns; at this point we only care about the results. AI is spreading fast, but should we simply accept that and rely on something we cannot really understand yet? Yes, I have heard about computers reaching 80% accuracy and higher, with less effort and far less time wasted. That is not the point, though. We do not live in numbers. The amount of data stored and analysed is beyond counting. At the end of the day, what matters is not how often machine learning gets something right, but what kind of mistake it makes when it gets something wrong, what the consequences are, and how many people are affected.

“If we will be able to program things like us, what if we were programmed or we can be programmable?” ~ Kristian Hammond

Actually, we should be terrified. The only way to defend AI development as it is right now is to accept all the science-fiction movies and the theories of us being destroyed by some “Greater Mind”. We are aware of our own cognition: it is easy for us to research a new topic while understanding the path we followed. But how do we describe AI cognition? How does it acquire new skills? Our own thinking is based on things such as:

  • Emotions
  • Creativity
  • Humour
  • Consciousness
  • Intuition

Machine learning cannot take those elements into account; there is no way to implement them. We can share our experience by uploading and feeding programs with more data, but our own intellectual laziness makes us blind to the possible tragedy. What makes us believe that the algorithms we created will work properly? Look at self-driving cars. A car is on the road, at legal speed, everything normal. Then a kid on a bike suddenly appears from the left side of the car's traffic lane, just in front of it. From the right, an older woman is walking straight under the wheels. It is too late for the brakes. What decision should be made? The options: turn left and hit the kid, turn right and hit the woman, or go straight and possibly make them both suffer. It is a moral dilemma. Personally, I do not know, and I would never want to make such a decision; it would be an impulse, I suppose. That's all. Life is life, and we have no right to choose one over another. Now follow the self-driving car's decision-making process. Which option is better? Who has a greater chance of surviving? Who is responsible? I would love to see an explanation of that kind of decision making.
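To see why this is so uncomfortable to automate, here is a minimal, entirely hypothetical sketch (no real vehicle works this way) of what "deciding" would have to look like in code. Someone has to assign a numeric cost to each outcome, and assigning those numbers *is* the moral choice; the algorithm that follows is trivial.

```python
# Hypothetical sketch of an autonomous car's collision logic, forced
# to rank outcomes. The cost values below are arbitrary, and that is
# exactly the point: a human must pick them, and picking them IS the
# moral decision the text describes.

OUTCOME_COST = {
    "swerve_left": 1.0,    # hits the child on the bike
    "swerve_right": 1.0,   # hits the elderly woman
    "brake_straight": 2.0, # risks hitting both
}

def choose_maneuver(costs):
    """Return the maneuver with the lowest assigned cost."""
    return min(costs, key=costs.get)

print(choose_maneuver(OUTCOME_COST))  # prints "swerve_left"
```

Note that the two swerve options tie at cost 1.0, so the "decision" between the child and the woman falls through to dictionary ordering, an arbitrary implementation detail standing in for a moral judgment.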

Autonomous weapons, social manipulation, invasion of privacy, discrimination: we cannot let AI dominate these fields. One word connects all globally successful people: INTUITION. Scientists, businessmen, trend influencers, all of them followed "the gut" that allowed them to improve. How do we define intuition? Where does it come from? How do we implement it in AI? We are not conscious of its source, so we cannot program it whatsoever. The change in the FBI Cyber Security Department's statement sums this up perfectly: from "There are two types of organisations: those who suffered a cyber-attack, and those who didn't yet" to "There are two types of organisations: those who suffered a cyber-attack, and those who don't know it yet".

We all know AI will kill many jobs before long. In my opinion, it will create almost as many. If awareness of the threats rises, we will need a huge number of people to analyse its steps. People partnering with AI will be necessary for the successful development of this technology. Manipulate the questions you ask so as not to be manipulated. We need to lead, not follow.

Sources:

https://www.youtube.com/watch?v=tr9oe2TZiJw&t=985s

https://www.forbes.com/sites/bernardmarr/2018/11/19/is-artificial-intelligence-dangerous-6-ai-risks-everyone-should-know-about/#f99a24c24040

http://www.robinsonspeakers.com/speaker-peter-haas-speakers-bureau

4 thoughts on “Technological threats”

  1. Miklashevich Yuliya says:

Not long ago I read the book “Origin” by Dan Brown. Its main idea is that an apocalypse is coming because a new kind of life is developing very quickly. This “new life” is AI. Maybe one day technologies will be able to program themselves using the proper code and algorithms. Who knows? Time will tell.

    • Mazurkiewicz Mateusz says:

That’s why most AI specialists are afraid of apocalypse theories in which control is taken from us by some machine. We have to be careful and follow the process of its self-learning. That is how we will be sure it does not go in the wrong direction.

  2. Białczyk Kuba says:

I think that improving tech requires not only the best engineers and scientists, but also philosophers and cognitive scientists, who will face dilemmas such as: is the technology their teams are developing good or bad? How is it going to affect the future? They also need to define how AI should perceive humans in order to become a natural part of human life. Plenty of tech-related teams work on improving computers’ understanding of humans. For instance, Natural Language Processing is not only a computer catching words and putting them into a file, but also understanding the syntax, semantics and morphology of a language. So I strongly believe that The Matrix scenario will not come true…

    • Mazurkiewicz Mateusz says:

AI, as the name suggests, should someday be able to think on its own. I wanted to show that we are not properly tracking the way we develop it. We apply systems that are not properly investigated, as long as they produce good results. We should not look at the scaling but at the details when it fails. I think most of us have heard about “Tay”, one of the first chatbots, which became racist and xenophobic in less than 24 hours. Of course, a group of people was responsible for feeding it fake data, but there is always some bad data around. In a few years, with more developed AI in every area of our lives and without human supervision applied, we could end up in deep trouble. I can agree with you: I do not think the Matrix scenario will come true. Nevertheless, without some good moral basics or some other kind of “the gut”, it may make tragic mistakes.
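The Tay incident can be illustrated with a toy model (this is not Tay’s actual design, just a made-up sketch): a “bot” that simply repeats the phrase it has been taught most often. A small coordinated group flooding it with bad input completely dominates its behaviour, which is why learning from raw, unmoderated user data needs human oversight.

```python
# Toy illustration of data poisoning: a "chatbot" that replies with
# the phrase it has seen most often. A coordinated minority easily
# outweighs honest input. Purely hypothetical, not Tay's real design.

from collections import Counter

class EchoBot:
    def __init__(self):
        self.seen = Counter()

    def learn(self, phrase):
        """Record one occurrence of a user-supplied phrase."""
        self.seen[phrase] += 1

    def reply(self):
        """Answer with the most frequently taught phrase."""
        if not self.seen:
            return "..."
        return self.seen.most_common(1)[0][0]

bot = EchoBot()
for phrase in ["hello"] * 3 + ["coordinated spam"] * 10:
    bot.learn(phrase)

print(bot.reply())  # prints "coordinated spam"
```

Three genuine greetings are drowned out by ten spam submissions; scaled up, the same dynamic turned a public chatbot toxic within a day.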
