Almost a year has passed since prominent figures such as Stephen Hawking, Elon Musk, and Bill Gates raised concerns about the threats AI poses to our lives in the near future, and at the very beginning of 2016 online media already seem flooded with all sorts of AI news, which promises to be the year's trend.
In this post I'd like to consider whether AI is a real menace that could wipe us out within the next twenty years, or whether it is about to give us a future our ancestors could only dream of.
First, let's consider what Artificial Intelligence is and what benefits it can offer us today:
- Some academics describe AI as "the science and engineering of making intelligent machines", but since that sounds too nerdy and boring, I would rather say it is an imitation of human intelligence that can handle dozens of problems the way people do, but faster and with a higher success rate. It is still programming, but unlike writing text in Word or playing a movie, it stands in for humans and does their jobs.
- To name the first benefit: science and medicine are said to be the first fields where AI is applied. AI is at work in hospitals, helping physicians understand which patients are at the highest risk of complications, and AI algorithms are finding important needles in massive data haystacks. For example, AI methods have recently been used to discover subtle interactions between medications that put patients at risk of serious side effects.
- The second benefit is GPS and the autonomous transport that relies on it. We plan trips using GPS systems that rely on AI to cut through the complexity of millions of possible routes and find the best one to take.
- Personal assistants such as Siri and Cortana, embedded in our phones and computers, not only give us helpful spoken hints but also detect faces and recognize people.
- And the last example on this list is search engines, which seem to know us better than we know ourselves when predicting our intentions.
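The route planning mentioned above ultimately rests on classical shortest-path search. A minimal sketch of Dijkstra's algorithm on a toy road network (the roads and travel times below are invented purely for illustration):

```python
import heapq

def dijkstra(graph, start, goal):
    """Return (cost, path) for the cheapest route from start to goal.

    graph: {node: [(neighbor, edge_cost), ...]}
    """
    queue = [(0, start, [start])]  # (cost so far, current node, path taken)
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge_cost in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + edge_cost, neighbor, path + [neighbor]))
    return None  # no route exists

# Hypothetical road network: edge weights are travel times in minutes.
roads = {
    "home":     [("highway", 10), ("downtown", 5)],
    "downtown": [("airport", 30)],
    "highway":  [("airport", 15)],
}
print(dijkstra(roads, "home", "airport"))  # (25, ['home', 'highway', 'airport'])
```

The seemingly shorter hop through downtown loses to the highway route once total travel time is counted, which is exactly the kind of trade-off a navigator evaluates across millions of road segments.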
But as always happens, after the rain of praise, critics spring up like mushrooms in a forest. They point out the potential threats of novelties and like to predict bad scenarios. Against the backdrop of the well-known T-1000 with a machine gun destroying everything and everyone, such predictions sound alarming. Currently, these are said to be the main points of concern:

- Artificial Intelligence is a computer program first of all, and only then intelligence. Like any other code, it sometimes breaks down, either producing the wrong output or lagging. That is fine with Windows, and everyone is used to it (yes, Windows is crap), but the stakes rise significantly when it comes to serious tasks.
- Since AI cannot be separated from the rest of the computer network, there are people who will try to attack it, break into it, and force it to do something it is not supposed to do.
- Bad instructions. As noted above, high stakes mean serious threats if AI works improperly, so like any other instrument it not only has to be free of bugs but also has to know its limits, i.e. be properly calibrated. For instance, if a driverless car is asked to deliver a passenger to the airport as fast as possible, will the AI take that as an order to put the pedal to the metal and drive at 150 km/h while running over pedestrians?
- And the last concern, considered the main threat by the personalities mentioned at the very beginning of this article, is AI starting to think far beyond what it is supposed to think about. Once it becomes truly autonomous, it might turn into a super-brain that knows everything better than people do, and who knows, it might come to see people as bugs in its own code, or even bugs of the Earth that prevent it from functioning properly.
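The "bad instructions" concern is essentially an objective that is missing its constraints. A minimal sketch of the idea (all speed values, limits, and function names below are invented for illustration): the passenger's "as fast as possible" request is clamped by hard safety rules that always take priority.

```python
SPEED_LIMIT_KMH = 100  # hypothetical legal limit on this stretch of road

def constrained_speed(desired_kmh, limit_kmh=SPEED_LIMIT_KMH, pedestrians_nearby=False):
    """Clamp a requested speed to hard safety constraints.

    The constraints (legal limit, slowing down near pedestrians) are not
    part of the passenger's objective; they override it unconditionally.
    """
    if pedestrians_nearby:
        return min(desired_kmh, 30)   # hard cap when pedestrians are detected
    return min(desired_kmh, limit_kmh)  # never exceed the legal limit

print(constrained_speed(150))                           # 100
print(constrained_speed(150, pedestrians_nearby=True))  # 30
print(constrained_speed(80))                            # 80
```

A naively programmed car would simply maximize speed; the point is that "drive fast" must be the objective inside the constraints, never instead of them.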
In conclusion, an AI doomsday seems more like a film scenario than a real-life prospect; however, greater attention and control should be paid to the way AI evolves, especially given the level of interest in it from all parties: business, politicians, IT ninjas, and even regular people from the crowd willing to master it.
There is no need to be afraid of AI, just as with the computer: 99% of the problems sit in front of the computer. The question here is rather what our capabilities are for creating functional and safe-to-use robots, rather than dysfunctional killing machines. Unless the aim of the robot is actually to kill people, since that seems to be a trend in weapons manufacturing as well. AI still depends on data, and it is data, just like human experience, that makes the person.
Just as a side note, this can happen if a robot is not programmed correctly or if we underestimate its intelligence:
http://edition.cnn.com/2015/07/02/europe/germany-volkswagen-robot-kills-worker/
Like the Futurama reference, and a well-written article 🙂 I think everything comes with its own pros and cons. If we are too afraid to explore the opportunities lying in front of us, we will never manage to break the existing limitations. A small remark on the fourth point, about robots thinking far beyond their limits: I don't think AI can ever develop individual consciousness, hence "artificial". Hence it won't be able to differentiate between humans and AI. Unless programmed to kill humanity, it won't do so by itself. We can apply the same argument on a general scale: humans are a threat to themselves and to everything and everyone around them, and they always have been! We were never able to predict the world wars, and we could not stop them, could we? Que sera sera.