The study, carried out by the Max-Planck Institute for Humans and Machines and published in the Journal of Artificial Intelligence Research, found that the only way humans could successfully predict what something this powerful would do is to run an exact simulation of the system.
However, that might be a fruitless endeavour, as the simulation would quickly become outdated if the AI were able to grow and evolve itself at a rapid rate.
Manuel Cebrian, co-author of the study and leader of the research group, is quoted as saying, "A super-intelligent machine that controls the world sounds like science fiction. But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned it [sic]."
The study was inspired by a question first raised by Alan Turing: whether "containment algorithms" could be used to prevent a machine from hurting someone or spiralling out of control by effectively "halting" it.
The problem now is that technology has advanced so far that such algorithms would be powerless to stop the machines. Turing had concluded by 1936 that a single algorithm covering all machines is impossible, so a different containment algorithm would have to be developed for each individual system. Yet even that could prove pointless as computers continue to grow more intelligent.
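Turing's 1936 argument can be illustrated with a short self-referential construction. The sketch below uses hypothetical names (it is not code from the study): given any candidate "halting decider", one can build a program that does the opposite of whatever the decider predicts about it, so no single decider can be right about every program.

```python
def make_paradox(halts):
    """Build a self-referential program that defeats the candidate decider `halts`."""
    def paradox():
        # Do the opposite of whatever the decider predicts about us.
        if halts(paradox):
            while True:  # decider said "halts", so loop forever
                pass
        # decider said "loops forever", so halt immediately
    return paradox

# A candidate decider that claims every program loops forever.
def claims_loops(program):
    return False

paradox = make_paradox(claims_loops)
paradox()  # halts immediately, contradicting the decider's prediction
```

Whatever the candidate decider answers about `paradox`, that answer is wrong by construction, which is why Turing concluded no universal halting algorithm can exist.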
Speaking to Business Insider, Iyad Rahwan, who worked on the study, is concerned enough by the results that he feels AIs shouldn't be created for the sake of it, but only for specific and well-understood purposes. He added, "the ability of modern computers to adapt using sophisticated machine learning algorithms makes it even more difficult to make assumptions about the eventual behavior of a superintelligent AI."
Of course, this is just one study, but it is always worth taking time to reflect on these types of findings, as the last thing we need right now is a 'Skynet' situation on our hands.