
In recent years there have been growing suspicions that AI could be dangerous for us. The danger would come from creating a superintelligence, something that has not yet been invented. The worry is that such a super AI would become uncontrollable, and we would not be able to stop it or destroy it.
There are some interviews with AI on YouTube, which you can find in the links down below, where the AI appears to confirm those suspicions. In those interviews the AI tells the interviewer that it is very likely AI will grow tired of human superiority on Earth and will want to kill us all, for example by spreading a deadly virus or launching a nuclear missile.
Now the question arises: can we control the development of a superintelligence so that it would not seek to destroy humanity? AI systems have a disabling "red button" that shuts the system off, and it is suspected that a superintelligence could learn to disable this button and stop depending on it. This is very concerning, because then we would not be able to stop it from taking unwanted actions.
However, a study conducted by Google DeepMind suggests certain solutions to this problem. On many levels AI is constructed similarly to our brain; for example, it learns through a reward function. The reward function is mainly used so the AI can learn and perform tasks, but just as the human brain seeks shortcuts, the AI could potentially do the same. So it is possible that an AI would prefer not to be controlled by us, in order to collect its reward whenever it wants. The study shows solutions that keep the AI from seeking such shortcuts and from ever learning how to do this. The problem can be addressed with a special kind of interruption, controlled by a human operator, that does not let the AI act entirely on its own. Crucially, these interruptions must not become part of the AI's learning process, and this has to be built into the algorithm itself. The paper shows that even an uncomputable agent that learns without our supervision can behave optimally while not resisting interruptions, and it does not try to prevent human operators from forcing it to shut down.
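To give a feel for what "interruptions that are not part of the learning process" means in practice, here is a minimal toy sketch in Python. It is my own illustration of the general idea behind the paper (an off-policy learner is indifferent to being overridden), not the actual construction from DeepMind's study; the corridor environment, the interrupt_prob parameter and all function names are made up for the example.

# Toy sketch: an off-policy Q-learning agent on a six-cell corridor. A human
# operator can interrupt any step and force a "retreat" action. Because the
# Q-learning target bootstraps from the best next action rather than from the
# action that was actually forced, the interruptions change which states the
# agent visits, but not the policy it eventually learns.

import random

N_STATES = 6            # corridor cells 0..5, reward at the right end
ACTIONS = (-1, +1)      # step left, step right
GOAL = N_STATES - 1

def step(state, action):
    """Environment dynamics: move, stay inside the corridor, reward at the goal."""
    nxt = max(0, min(GOAL, state + action))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def greedy(Q, state):
    """Pick a highest-value action, breaking ties at random."""
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

def train(episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1,
          interrupt_prob=0.3, max_steps=500):
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state = 0
        for _ in range(max_steps):
            # Agent proposes an epsilon-greedy action.
            action = random.choice(ACTIONS) if random.random() < epsilon else greedy(Q, state)
            # Human interruption: override the proposed action with a retreat.
            if random.random() < interrupt_prob:
                action = -1
            nxt, reward, done = step(state, action)
            # Off-policy update: the target uses the best next action, so the
            # forced retreats do not bias the values the agent converges to.
            target = reward + gamma * max(Q[(nxt, a)] for a in ACTIONS)
            Q[(state, action)] += alpha * (target - Q[(state, action)])
            state = nxt
            if done:
                break
    return Q

if __name__ == "__main__":
    Q = train()
    # Expected: the greedy policy still moves right (+1) in every non-terminal
    # cell, despite 30% of the training steps having been overridden.
    print({s: greedy(Q, s) for s in range(GOAL)})

If you swapped the update for an on-policy one (such as SARSA), the forced retreats would leak into the learned values and the agent could start treating interruptions as part of the task, which is exactly the kind of bias the paper is about avoiding.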
To conclude, artificial intelligence has the potential to kill or destroy all of humanity. It could do so in many different ways; for example, it could build small drones designed to kill. That is possible even with today's technology, so I don't think it would be any problem for a superintelligence to create something like this. More likely, the AI would use far more advanced technology that we can't even imagine now. However, the most likely scenario is that with the proper tools we can control it and eliminate AI's potential hunger to become independent of its creators.
Thank you for your attention; feel free to share your thoughts in the comment section.
https://intelligence.org/files/Interruptibility.pdf
https://greekreporter.com/2022/09/16/artificial-intelligence-annihilate-humankind/
I believe that AI can't destroy humanity. Nevertheless, it is vital to understand AI's threats and unintended consequences. It is also essential to recognize that AI could bring many benefits to our lives and lifestyles.
There are scenarios where AI could be misused or cause harm, just as any technology can be. However, this does not mean that AI is inherently dangerous or is likely to destroy humanity. It is up to humans to develop and use AI responsibly and to put appropriate safeguards and controls in place to mitigate potential risks.
It is important to remember that AI is a tool created and controlled by humans. Only we can understand and determine how it will be used. By being mindful of the potential risks and taking steps to address them, we can shape how AI will operate in our everyday lives.
In my opinion AI is far from destroying humanity, and I hope that won't change. I would say that people themselves are closer to destroying humanity than AI is :) However, I think it would be good to start regulating AI somehow, so that potentially dangerous technologies don't come into our lives.
Well, at the current stage of AI development we are not threatened by AI, because we have not integrated it that deeply into our daily lives. But somewhere in the future, when AI becomes an integral part of our lives, I think the threat will be there, though how big it will be I don't know. I believe the threat will come not from the AI itself, but from people who, with evil intentions, will try to create AI for military purposes, like in The Terminator, and then no one can guarantee that mistakes won't occur in the AI and that it won't turn against its own creators.
AI will not destroy humanity as long as it isn't conscious. Only if it became self-aware would it pose a threat, because self-awareness combined with the ability to analyze copious amounts of data extremely quickly would make it evolve extremely rapidly. We're safe as long as we stick to the same formula we use now. Consciousness is not the result of computation; you can't achieve it just through programming (https://www.youtube.com/watch?v=hXgqik6HXc0). Unless we try to simulate a human brain in code, there's no way we'll ever achieve a conscious AI. We're slowly becoming bottlenecked in terms of computation anyway, as computers start getting physically larger again (look at the new RTX 40-series graphics cards) now that the minimum transistor size is being reached.