The idea of artificial intelligence overthrowing humanity has been talked about for decades, and scientists have just delivered their verdict on whether we could control a high-level computer super-intelligence. The answer? Almost certainly not.
The catch is that controlling a super-intelligence far beyond human comprehension would require a simulation of that super-intelligence which we can analyze. But if we're unable to comprehend it, it's impossible to create such a simulation.
Rules such as 'cause no harm to humans' can't be set if we don't understand the kinds of scenarios that an AI is going to come up with, suggest the authors of the new paper. Once a computer system is working at a level above the scope of our programmers, we can no longer set limits.
“A super-intelligence poses a fundamentally different problem than those typically studied under the banner of ‘robot ethics’,” the researchers write.

“This is because a superintelligence is multi-faceted, and therefore potentially capable of mobilizing a diversity of resources in order to achieve objectives that are potentially incomprehensible to humans, let alone controllable.”
Part of the team's reasoning comes from the halting problem, put forward by Alan Turing in 1936. The problem centers on knowing whether or not a computer program will reach a conclusion and answer (and so halt), or simply loop forever trying to find one.

As Turing proved through some clever math, while we can know the answer for some specific programs, it's logically impossible to find a method that would let us know it for every potential program that could ever be written. That brings us back to AI, which in a super-intelligent state could feasibly hold every possible computer program in its memory at once.

Any program written to stop AI from harming humans and destroying the world, for example, may reach a conclusion (and halt) or not – it's mathematically impossible for us to be absolutely certain either way, which means such an AI cannot be contained.
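The impossibility the researchers lean on can be sketched in a few lines. The Python snippet below is an illustration, not code from the paper; the function names `halts` and `contrarian` are hypothetical. It shows Turing's classic diagonalization argument: if a general `halts` decider existed, we could build a "contrarian" program that does the opposite of whatever the decider predicts about it, producing a contradiction.

```python
def halts(program, input_data):
    """Hypothetical oracle: return True iff program(input_data) halts.

    Turing proved no such general algorithm can exist, so this is
    deliberately left unimplemented -- that impossibility is the point.
    """
    raise NotImplementedError("no general halting decider can exist")


def contrarian(program):
    """Do the opposite of whatever `halts` predicts about program(program)."""
    if halts(program, program):
        while True:      # predicted to halt -> loop forever
            pass
    return               # predicted to loop -> halt immediately


# Feeding contrarian to itself yields the contradiction:
# if contrarian(contrarian) halts, then halts(...) returned True,
# so it loops forever -- and vice versa. Hence `halts` is impossible.
```

The same structure applies to a hypothetical "containment check" that must decide whether an arbitrary super-intelligent program will ever execute a harmful action: deciding that for every possible program reduces to the halting problem.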
“In effect, this makes the containment algorithm unusable,” said computer scientist Iyad Rahwan of the Max-Planck Institute for Human Development in Germany.
The alternative to teaching AI some ethics and telling it not to destroy the world – something which, the researchers say, no algorithm can be absolutely certain of doing – is to limit the capabilities of the super-intelligence. It could be cut off from parts of the internet or from certain networks, for example.

The new study rejects this idea too, suggesting that it would limit the reach of the artificial intelligence; the argument goes that if we're not going to use it to solve problems beyond the scope of humans, then why create it at all?
If we are going to push ahead with artificial intelligence, we might not even know when a super-intelligence beyond our control arrives, such is its incomprehensibility. That means we need to start asking some serious questions about the directions we're going in.
“A super-intelligent machine that controls the world sounds like science fiction,” said computer scientist Manuel Cebrian, also of the Max-Planck Institute for Human Development. “But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned it.

“The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity.”
The research has been published in the Journal of Artificial Intelligence Research.