Is a new winter of artificial intelligence coming? In the second half of the last century, artificial intelligence was making uneven progress. From 1974 to 1980 the field suffered a severe blow that virtually halted academic interest in it. The reason was that, during the preceding years, several high-profile projects had failed in their attempts to deliver practical applications, such as machine translation. Fortunately, the field recovered, but in 1987 a second AI winter began: another six years in which funding was scarce, with everything that implies for the progress of the discipline. Of course, we are not in the same scenario as when those winters arrived. Artificial intelligences keep improving, and both the public and private sectors are investing serious amounts of money. The danger now is very different: AI may die of its own success.
There have always been voices critical of artificial intelligence. Some offered well-argued objections; others were not ashamed to show that everything they knew about artificial intelligence came from works of science fiction and the odd nightmare. In a way, it stands to reason that the criticisms were so disparate, both in their arguments and in their validity, because the concept of “artificial intelligence” was remote from the public. Now that artificial intelligence is becoming popular and we can all “enjoy” its remarkable power, however, our experience is more homogeneous, debates are better aligned, and dissenting voices have an easier time organizing to highlight the potential dangers of this technology. In fact, they have done exactly that, presenting a letter with more than 1,000 signatures requesting a moratorium of at least six months on the development of these technologies.
What they ask is relatively simple to state: at least half a year during which advances in artificial intelligence are paused, so that efforts can be focused on developing a set of safety protocols for the development of AIs. However, the fact that the request is simple to state does not mean that putting it into practice is simple; quite the contrary. One of the biggest problems is that we are dealing with a terrain that is constantly changing and, as if that were not enough, one we do not know much about either. The very nature of this branch of computer science makes its achievements somewhat opaque: we do not always know how these systems work and, therefore, regulating them will be complicated even at a theoretical level. To this we must add another layer of complexity arising from the political and economic implications, since the interests of large companies will weigh on any attempt to create official safety protocols.
In other words: six months is a really short period of time in which to develop solid, agreed-upon protocols. Above all if we assume that, for such protocols to have any value, they must be imposed by some body with real authority, with all the bureaucratic slowness that implies. Of course, the signatories of the letter are well aware of all this, which is why they suggest a minimum of six months, not six months exactly. So why say “at least six months” rather than “at least a year,” or two? A year would be more realistic, of course, but it would also make one of the great problems of this proposal clearer: the clash between idealism and a boiling market where every second is gold and where, if six months already feels like an eternity, a year becomes completely unaffordable.
Right now AI companies are living the dream. They grow and thrive at breakneck speed, and they are a safe bet for the large corporations that have decided to invest (in one way or another) in this emerging sector. What is more, many of them have spent years developing artificial intelligence and now lead the sector, such as DeepMind or OpenAI. Will these companies accept the moratorium? We will have to see under what conditions, because right now a week in the world of artificial intelligence is like several months in any other line of research. We can assume that for every company that abides by the moratorium, even if it ends up being imposed by law, there will be others, perhaps in other parts of the world, that continue their research or, even worse, that decide to pour even more resources into artificial intelligence in order to turn the situation to their own advantage.
Six months with their arms crossed while a handful of competitors put the turbo on their projects could mean multimillion-dollar losses. Admittedly, the situation is not quite that simple, because there are other ways for these companies to immediately monetize what they have already developed, even if they have to halt the release of new products; but we cannot ignore that, in the medium and long term, we run into the same serious problem again. All of this surrounds the moratorium with an aura of implausibility: a gesture made, possibly, with as much goodwill as it has little chance of success.
However, none of this means that citizens do not want a moratorium. Artificial intelligence is a sector running without brakes, like a runaway horse galloping with no clear destination. Of course, companies know their goals: they know what improvements they would like to implement and can sense what gains in performance those improvements will bring. That is not the question; when we talk about the direction they are taking, we are referring rather to the compass of values that guides these advances. We have lived through, and survived, several technological revolutions that promised to put us out of work. Possibly this case is similar, but what we cannot ignore, for example, is that there will be short-term consequences that affect the labor market and, therefore, the well-being of many individuals.
What limitations should we apply to reduce this impact? Who owns the creations of an artificial intelligence? How should we regulate the creation of deepfakes? Should we define new crimes related to the use of these artificial intelligences? Is it illegal to possess completely realistic images of something that would clearly be illegal had it not been created by an AI? These are open questions, or at least questions that legislation has not fully resolved. What will happen if technology keeps advancing faster than the law can act?
In reality, many of these doubts are extreme evolutions of issues that have been with our society for some time. These questions contain the essence of technological progress, and we have to understand them from that perspective, for what they are. In fact, the great masters of science fiction already warned us about most of these problems, but we preferred to think they were talking about a distant future when, in reality, they have always talked about their own time, about the technological issues that worry us most, no matter what century we live in.
DON’T BE FOOLED:
The letter does not propose a true plan of action. It merely makes a series of well-intentioned claims about how things should be, but offers no mechanisms to actually make them work that way. In part, that is what is to be expected from a letter that aims to gather signatures from a large number of relevant figures in the field: vague statements with which many can identify. We therefore have to understand this letter for what it is: an attempt to make visible that people theoretically knowledgeable in the field are concerned about a series of problems that institutions are not addressing. Another question is whether the signatories are, in each and every case, people with sufficient expertise for their opinion to be taken into account.
REFERENCES (MLA):
Pause Giant AI Experiments: An Open Letter (2023). Future of Life Institute. Available at: https://futureoflife.org/open-letter/pause-giant-ai-experiments/ (Accessed: April 5, 2023).