The open letter that a group of prominent figures from the technology world, including Elon Musk and Steve Wozniak, published last week, demanding a six-month pause in the training of new artificial intelligence systems while safety protocols are developed, has sparked a wide-ranging debate. Eliezer Yudkowsky, considered one of the leading experts in artificial intelligence, a field he has been researching for more than 20 years, has joined the conversation with an article published in Time magazine under the title "Pausing AI Developments Isn't Enough. We Need to Shut it All Down".
The AI and ethics researcher explains why he did not sign the letter and lays out his view of what awaits us if artificial general intelligence is achieved under current circumstances, and what we should do to prevent "all biological life on Earth" from dying shortly thereafter.
An AI better than humans
In 2000, Yudkowsky co-founded the Machine Intelligence Research Institute (MIRI) and is one of the pioneers and leading figures in the field of friendly artificial intelligence: the idea that artificial intelligence should be designed to be compatible with human values and goals.
In his article, he welcomes the call for a six-month moratorium "because it is better than no moratorium", but argues that the signatories underestimate the seriousness of the situation and are "asking for too little to solve it". He maintains that all ongoing artificial intelligence training must be halted, indefinitely and worldwide, until these kinds of systems can be developed safely.
Musk and the other signatories expressed concern that artificial intelligence tools are reaching a level at which they are becoming "human-competitive". For Yudkowsky, the problem is not this, but what will come once artificial intelligence surpasses natural human intelligence.
"The key thresholds (to get there) may not be obvious. We definitely can't calculate in advance what happens and when, and it currently seems imaginable that a research lab could cross critical lines without noticing," he explains.
Like Australopithecus trying to outdo Homo sapiens
The most likely result of building a truly intelligent artificial intelligence "under anything remotely resembling the current circumstances is that literally everyone on Earth will die. Not as in 'maybe, possibly, some remote chance', but as in 'that is the obvious thing that would happen'".
Avoiding that outcome requires "precision, preparation and new scientific insights". Without them, we will most likely end up with an AI that "does not do what we want and does not care for us, nor for sentient life in general".
The problem is not, "in principle", unsolvable. That kind of caring, in which sentient life has value, could be imbued into artificial intelligence systems, but "we are not ready and do not currently know how". Without it, artificial intelligence would see us as resources for its own ends: an AI that "does not love you, nor does it hate you; you are made of atoms it can use for something else".
For Yudkowsky, humanity facing off against a superior intelligence would end in disaster: like "the 11th century trying to fight the 21st century" or "Australopithecus trying to outdo Homo sapiens".
What would a hostile AI look like?
He envisions a hostile, superhuman AI as "an entire alien civilization, thinking at millions of times human speed, initially confined to computers", a confinement that would not last long.
We live in a world where you can "email DNA strands to labs that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or jump straight to post-biological molecular manufacturing".
The researcher criticizes the plan of OpenAI, the developer of ChatGPT, which is currently training its next language model, GPT-5, "to have some future artificial intelligence do the AI alignment task". Alignment is the degree to which the goals and actions of an artificial intelligence system match the goals and interests of its designers or users; that is, the AI does what we want it to do and does not do what we don't want it to do.
A more powerful cognitive system
He shares the view that more advanced AIs, such as those built on the GPT-4 language model, are probably not self-aware but merely imitate self-awareness using the data on which they have been trained, although "we don't know for sure".
If the jump to GPT-5 is as big as the one from GPT-3 to GPT-4, "we will no longer be able to say that they probably don't have consciousness. If we let people make artificial intelligences like GPT-5, we will have to say that we do not know, that nobody knows".
However, he also notes that the danger does not necessarily require a self-aware AI, something "we can't determine either"; the danger is intrinsic to confronting a more powerful cognitive system.
“Shut it all down”
Yudkowsky recommends halting all artificial intelligence training in every country until sufficient safety is achieved, shutting down the GPU computing clusters these systems use, tracking GPU sales to prevent any country from building such a cluster, putting a ceiling on the computing power that can be used to train an AI, and reserving any exceptions for very narrow fields, with no connection to the Internet.
"We are not ready. We are not on track to be significantly more prepared in the foreseeable future. If we go ahead with this, everyone will die, including children who did not choose this and did nothing wrong. Shut it all down," Yudkowsky concludes.