Danish Kapoor

Global opposition to artificial superintelligence grows

The direction and current state of artificial intelligence have long been debated. This time, however, technical issues are not the only thing at the center of the discussion. Objections from different segments of society are calling into question far more ambitious and risky goals, such as artificial superintelligence. In this context, a declaration signed by more than 800 people stands out as the clearest expression of these concerns.

Technology experts are not the only signatories. The list spans a wide range, from Nobel Prize-winning scientists to former government officials, from artists to strategy consultants. That figures as different as Steve Wozniak, Prince Harry, Geoffrey Hinton and Steve Bannon signed the same text shows the concern has risen above politics. What these diverse groups share is the belief that the uncontrolled development of superintelligence could have serious consequences for humanity. The declaration itself makes a clear call: such work should be banned until broad scientific consensus and public support are achieved.

As superintelligence development advances, the question of control grows more contentious

The Future of Life Institute, which organized the call, is one of the prominent organizations drawing attention to the social effects of technology. Anthony Aguirre, the institute's director, emphasizes that the process has taken shape without society's knowledge or approval. In his view, the development path of the technology is largely set by companies, leaving the public out. Although statements from technology pioneers sound promising, the fact that these developments proceed without a social foundation worries many circles. As Aguirre puts it, “No one was asked whether they really wanted this process.”

What artificial intelligence has achieved so far has generally been limited to success at narrow tasks. Advances have been made in areas such as autonomous vehicles and natural language processing, but significant limitations remain on complex tasks. The goal of superintelligence is to go beyond all these limits: systems that think faster and analyze better than human experts. Precisely for this reason, the question of how long control can be maintained becomes critical.

Meanwhile, as these debates continue, large technology companies press ahead without slowing down. OpenAI, Meta and similar firms back superintelligence-focused projects with investments worth billions of dollars. Mark Zuckerberg says this level has now become “visible”; Elon Musk says superintelligence is currently developing in “real time”. Yet none of these companies' executives appear among the declaration's signatories.

This suggests that the distance between technology companies and society is widening. There is clearly a serious information gap in the public regarding these developments, and the fact that technology actors continue to pursue their own agendas only sharpens the criticism. Scientific communities want the process made transparent and auditable; otherwise, the disconnect between technological progress and social acceptance may deepen further.

Another open letter published in recent months contained similar warnings. Its signatories focused on the direct effects of existing artificial intelligence systems rather than superintelligence, raising issues such as unemployment, loss of privacy, climate change and human rights. The common point in both texts is the finding that the pace of AI development exceeds society's capacity to keep up with these changes. Therefore, social and ethical considerations, not only technological ones, must become part of the process.

In addition, some economists doubt the sustainability of investments in this area, warning that the expectation bubble in artificial intelligence risks bursting in the short term. If that happens, not only the tech sector but also the global economy could be shaken. As investments have grown rapidly, the market's perception of risk has risen at the same rate. From this perspective, financial responsibility is on the agenda alongside scientific responsibility.
