Calculations show it would be impossible to control a super-intelligent AI

The idea that artificial intelligence could overthrow humanity has been discussed for decades, and scientists have now delivered their verdict on whether we would be able to control a high-level computer superintelligence. The answer? Almost certainly not.

The catch is that controlling a superintelligence far beyond human comprehension would require a simulation of that superintelligence that we can analyze. But if we cannot comprehend it, it is impossible to create such a simulation.

Rules such as “cause no harm to humans” cannot be set if we do not understand the kinds of scenarios an AI is going to come up with, the authors of the new paper suggest. Once a computer system is operating at a level beyond the scope of our programmers, we can no longer set limits.

“A superintelligence is a fundamentally different problem from those typically studied under the banner of ‘robot ethics,'” the researchers write.

“This is because a superintelligence is versatile and therefore has the potential to mobilize a diversity of resources in order to achieve goals that may be incomprehensible to humans, let alone controllable.”

Part of the team’s reasoning stems from the halting problem, raised by Alan Turing in 1936. The problem centres on knowing whether or not a computer program will reach a conclusion and answer (so it halts), or simply loop forever trying to find one.

As Turing proved through some clever math, while we can know whether a particular program will halt for some specific cases, it is logically impossible to find a method that will tell us that for every potential program that could ever be written. Which brings us back to AI, which in a superintelligent state could feasibly hold every possible computer program in its memory at once.
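Turing's diagonal argument can be sketched as a thought experiment in code. The sketch below assumes a hypothetical `halts` oracle; no such function can actually be implemented, which is exactly the point of the proof:

```python
def halts(program, argument):
    """Hypothetical oracle: True iff program(argument) eventually halts.
    Turing proved no such function can exist for all programs."""
    raise NotImplementedError("undecidable in the general case")


def paradox(program):
    # Do the opposite of whatever the oracle predicts:
    # loop forever if program(program) is predicted to halt,
    # halt immediately if it is predicted to loop forever.
    if halts(program, program):
        while True:
            pass
    return "halted"


# Feeding paradox to itself would force halts() to contradict
# its own answer -- so a working halts() cannot exist.
```

Whatever answer `halts(paradox, paradox)` returned would be wrong, and that contradiction is the heart of the proof.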

For example, any program written to stop an AI from harming humans and destroying the world may reach a conclusion (and halt) or not. It is mathematically impossible for us to be absolutely sure either way, which means such a program is not containable.
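The shape of the researchers' reduction can be illustrated with a rough sketch (the function names here are hypothetical illustrations, not the paper's formalism): if a perfect containment check existed, it could be used to solve the halting problem, which Turing showed is impossible.

```python
def harm_humans():
    """Stand-in for any action a containment rule must forbid."""
    pass


def is_harmful(program, argument):
    """Hypothetical containment check: True iff program(argument)
    would ever perform a harmful action. The argument sketched here
    is that this check cannot exist in general."""
    raise NotImplementedError("reduces to the halting problem")


def halts(program, argument):
    # Wrap the program so that it 'harms humans' exactly when it halts.
    def wrapper(_):
        program(argument)  # run to completion (if it ever finishes)...
        harm_humans()      # ...then perform the forbidden action
    # A perfect containment check on the wrapper would reveal whether
    # program(argument) halts -- contradicting Turing's theorem.
    return is_harmful(wrapper, None)
```

Since a general halting decider cannot exist, neither can the perfect containment check it would be built from.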

“In effect, this renders the containment algorithm unusable,” said computer scientist Iyad Rahwan of the Max Planck Institute for Human Development in Germany.

The alternative to teaching AI ethics and telling it not to destroy the world – something no algorithm can be absolutely certain of doing, the researchers say – is to limit the capabilities of the superintelligence. It could be cut off from parts of the internet, or from certain networks, for example.

The new study rejects this idea too, suggesting it would limit the reach of the artificial intelligence; the argument goes that if we are not going to use it to solve problems beyond the scope of humans, then why create it at all?

If we push ahead with artificial intelligence, we might not even know when a superintelligence beyond our control arrives, such is its incomprehensibility. That means we need to start asking serious questions about the directions we are heading in.

“A super-intelligent machine that controls the world sounds like science fiction,” says computer scientist Manuel Cebrian of the Max Planck Institute for Human Development. “But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned it.”

“The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity.”

The research is published in the Journal of Artificial Intelligence Research.
