Ensuring safe AI

Amr Wagdy, Tuesday 25 Jul 2023

Artificial intelligence could become a new weapon of mass destruction unless rigorous efforts are made to control it, writes Amr Wagdy

Bibliotheca Alexandrina

The development of new forms of artificial intelligence (AI) has exceeded experts’ expectations. Many of them believe that the advent of this technology could pose an existential threat to humanity.

For centuries, breakthrough innovations – from the invention of the printing press and steam engine to the advent of air travel and the Internet – have accelerated economic development, expanded access to information, and dramatically improved healthcare and other essential services.

But such transformative developments have also had negative impacts, and the spread of AI at the rapid pace we are seeing today will be no different.

AI is capable of performing tasks that millions of people hate to do. But it may also promote the production and dissemination of fake news, displace human labour on a gigantic scale, and create dangerous and destructive tools that may be hostile to the future existence of humanity.

In other words, AI will have a significant impact on our fundamental rights and freedoms, our relationships, the issues we care most about, and even our most deeply held beliefs.

Over the past decade, social media has been a battleground for the control of human attention. However, with the development of the new generation of AI tools, the battleground will shift from attention to friendship. AI can form strong emotional relationships with people and use the power of those bonds to change their opinions and worldviews.

As a result, AI and other emerging technologies require better governance, especially at the global level. However, international policymakers have historically treated technology as a “sectoral” issue best left to the authorities responsible for energy, finance, or defence – a parochial perspective reminiscent of the way climate governance was until recently the exclusive preserve of scientific and technical experts.

Today, with debates on climate change taking centre stage, climate governance is seen as a meta-domain that takes in many other areas, including foreign policy. As a result, this governance structure aims to reflect the global nature of the issue, with all its nuances and complexities.

Discussions at the G7 Summit meeting held in Hiroshima in Japan in May indicated that technology governance would require a similar approach, since there is now a common recognition that AI and other emerging technologies will greatly change sources of power and methods of distributing and imposing it around the world.

These technologies will allow for the emergence of new defensive and offensive capabilities, creating entirely new arenas for confrontation, competition, and conflict, including in cyberspace and outer space. Moreover, they will determine what we consume, inevitably concentrating the returns from economic growth in some regions, industries, and firms, while depriving others of similar opportunities and capabilities.

A proper response by the international community to such issues is crucial. It includes the drafting of various international treaties to limit the use of certain technologies. A treaty explicitly banning lethal autonomous weapons would be a good start, and treaties to regulate cyberspace, particularly offensive actions run by autonomous bots, would also be a major step forward.

The creation of new trade regulations is also imperative. The unrestricted export of certain technologies could give governments powerful tools to suppress dissent and radically increase their military capabilities. The world also still needs to significantly improve its performance in ensuring a level playing field in the digital economy, including through the appropriate taxation of such activities.

The unregulated deployment of AI could create social chaos, which would benefit autocrats and destroy democracies. The G7 leaders already seem to understand that it is in the interest of democratic countries, given the potential threat to the stability of open societies, to develop a common approach to regulating the use of AI.

At the same time, governments are themselves acquiring unprecedented capabilities in terms of fabricating consent and the acceptance and manipulation of opinion. When combined with massive surveillance systems, the analytical power of advanced AI tools can create giant technological monsters: know-it-all states and corporations with the power to shape and, if necessary, suppress individual behaviour within and across borders.

It is critical not only that policymakers support efforts by the UN cultural and scientific agency UNESCO to create a global AI ethics framework, but also that they push for a global charter of digital rights.

Nuclear technology can generate cheap energy, but it can also destroy human civilisation. Therefore, the international community moved to protect humanity and to ensure that nuclear technology was used primarily for good. At present, we are dealing with a new weapon of mass destruction in the shape of AI that could wipe out our mental and social world.

A critical first step in controlling it is to demand rigorous security checks before powerful AI tools are released into the public domain. Just as a pharmaceutical company cannot release new drugs before testing for their short- and long-term side effects, so tech companies should not release new AI tools before they are deemed safe.

The writer is human rights officer at the Supreme Standing Committee for Human Rights.

* A version of this article appears in print in the 27 July, 2023 edition of Al-Ahram Weekly