A large number of figures from the technology world have published a joint letter warning of the dangers of the current artificial intelligence race in which the big tech companies are immersed. They call on these companies to pause the training of AIs “more powerful than GPT-4” for six months, and to use that time to develop “a set of shared safety protocols for advanced AI design” that guarantees these systems are safe.
Among the more than 1,000 signatories of this open letter, published on the website of the non-profit organization Future of Life Institute, are figures such as Elon Musk (CEO of Tesla, SpaceX and Twitter), Steve Wozniak (co-founder of Apple), Jaan Tallinn (co-founder of Skype) and Yuval Noah Harari (author of the bestseller Sapiens: A Brief History of Humankind).
In the text, the signatories acknowledge that “advanced AI could represent a profound change in the history of life on Earth”, but warn that it “should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening”.
They argue that artificial intelligence systems are now becoming competitive with humans at general tasks, which raises questions such as: “Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”
The letter notes that such decisions must not be delegated to unelected tech leaders, and that advanced AI systems should be developed “only once we are confident that their effects will be positive and their risks will be manageable”.
For this reason, they ask artificial intelligence companies and laboratories to pause for at least six months the training of AI systems more powerful than GPT-4, the OpenAI language model that powers the paid version of ChatGPT and the Bing search engine.
They request that this time be used by laboratories and independent experts to jointly develop the aforementioned set of shared safety protocols, which would then be audited and overseen by independent outside experts.
They also advocate the development of AI governance policies, including new regulatory authorities dedicated to AI; oversight and monitoring of advanced AI systems and the large data centers in which they operate; watermarking systems to distinguish synthetic images from real ones; an auditing and certification ecosystem; liability for harm caused by AI; and public funding for research into AI safety.