The Global Risks Report 2024, published by the World Economic Forum (WEF), cites misinformation and disinformation as the biggest short-term risks to sustainable development and to efforts to create safe societies. It also warns that artificial intelligence (AI) will accelerate the proliferation of mis- and disinformation in the years to come.
It predicts that the dangers of this phenomenon will persist for the next ten years. Over that horizon, it ranks fifth among the top ten global risks, after extreme weather events, critical change to Earth systems, biodiversity loss and ecosystem collapse, and natural resource shortages, and ahead of the adverse outcomes of AI technologies, involuntary migration, cyber insecurity, societal polarisation, and pollution.
The risks of mis- and disinformation stem from two main factors. The first is the growing ability of their disseminators to impact a particular country or issue very rapidly, sometimes in hours or even minutes, whereas a few decades ago such campaigns would have taken three to four years to produce equivalent effects. This increase in speed and magnitude is the result of advances in digital technologies and the growing number of social-media platforms and channels.
The second factor is evidenced in the many instances in which mis- and disinformation campaigns have influenced target audiences in ways that undermine national security and sustainable development and complicate efforts to control transnational organised crime such as money laundering and human trafficking.
The report lends weight to the decisions of governments around the world to develop policies designed to enable them to manage the effects of mis- and disinformation campaigns in ways that safeguard their national interests and prevent the erosion of trust between them and their citizens. The countries of the Middle East and North Africa (MENA) region are no exception to this trend.
Many MENA countries have adopted various policies in this regard. For some, responding to these campaigns is also an avenue to strengthen their regional influence, while for others it offers a means to develop their foreign policies in line with the demands of the 21st century. A third response has been to attribute mis- and disinformation campaigns to conspiracies aimed at weakening societies and turning people against their rulers.
The adverse impacts of the spread of mis- and disinformation relate not only to the extent to which a country is exposed to them, but also to the level of awareness among digital platform users of the risks of engaging with misleading information. This is perhaps the most crucial dimension, as the general rule for measuring the success of a mis- or disinformation campaign resides in the response it meets among target audiences and the changes it produces in their ideas, attitudes, and behaviours.
While governments are the most affected by the adverse impacts of mis- and disinformation campaigns and bear the main responsibility for countering them, they cannot on their own keep pace with the quantum leaps in the spread of mis- and disinformation made possible by the proliferation and increasing power of AI applications. These tools are making the task of tackling the threat ever more complex.
Other parties possess significant capacities and expertise they could contribute to the task. They include the tech firms that own the social media platforms through which so much mis- and disinformation is transmitted and the “influencers” who, wittingly or not, transmit misleading information to large numbers of followers.
Mis- and disinformation campaigns have become part of the new normal of the 21st century. If in the past states could be the main agents in such campaigns, today the players include individuals, private companies, multinationals, organised crime networks, and extremist and terrorist entities.
This new reality calls for effective policies based on a "whole society approach." This is the best framework for organising a constructive partnership between the government and the other key players in the lifecycle of disinformation campaigns, from tech firms and individual influencers to research institutes and universities, media and news organisations, and civil society organisations.
Such policies should aim to achieve several main objectives. The first is to raise awareness among those concerned with national security policymaking of the risks of disinformation. The second is to remedy the information deficiencies that have helped to create the space and environment conducive to the proliferation, allure, and influence of mis- and disinformation.
A third objective is to develop a social media governance system that strikes a balance between ensuring freedom of expression and protecting national security. A fourth is to devise and implement programmes to educate the public about what mis- and disinformation are and the dangers of spreading or participating in them.
The writer is director general of the Regional Expertise Centre for Combating Drugs and Crime at NAUSS.
* A version of this article appears in print in the 18 January 2024 edition of Al-Ahram Weekly