On 27 March, five days after the Crocus City Hall massacre in Moscow, an Islamic State media platform broadcast a 92-second video showing footage of the attack and a news anchor stating that it was part of “the natural context of the ongoing war raging between the organisation and the countries at war against Islam.” SITE Intelligence Group, a US consultancy that tracks the online activity of terrorist and extremist groups, has determined that the news anchor was an artificial intelligence (AI) “fake”.
Not long before this, on 9 February this year, the Islamic Media Cooperation Council (IMCC), an Al-Qaeda-affiliated media group launched in September 2023 to enhance the quality of jihadi media production, held a workshop on AI. The purpose, according to the group’s announcement, was to strengthen the use of AI applications in media and other fields.
The foregoing are signs of a future in which terrorism and AI converge, raising concerns that have led governments to recognise the need to monitor and regulate AI platforms. Accordingly, in May, the EU formally adopted the Artificial Intelligence Act. Described on the AI Act website as the “first comprehensive regulation on AI by a major regulator anywhere,” the act seeks to strike a balance between creative freedom and security.
Terrorist organisations are eager to exploit the potential of AI for military and mobilisational purposes. A notable example is the incorporation of AI applications into the drones used by terrorist groups such as the Houthis, Boko Haram, Hizbullah and IS for intelligence, surveillance, reconnaissance and target acquisition.
In 2021, the UN Office of Counter-Terrorism released a report titled “Algorithms and Terrorism: The Malicious Use of Artificial Intelligence for Terrorist Purposes.” The report discusses two main categories of potential threats related to terrorist use of AI. One is cybersecurity threats, such as ransomware attacks, denial-of-service (DoS) and distributed denial-of-service (DDoS) attacks, and website defacement. The other involves physical threats using autonomous vehicles, drones with facial recognition and possibly genetically targeted bioweapons.
AI has also increasingly been used to disseminate propaganda and disinformation through “deepfake” technology. The phenomenon has contributed significantly to the spread of suspicion and doubt in contemporary societies, and it has done much to undermine the credibility of conventional media. It is believed that terrorist organisations will use deepfakes to generate states of confusion and uncertainty, manufacture false consensus, and undermine trust in government.
Recent advances in AI technology and the emergence of increasingly sophisticated generative programmes such as ChatGPT, described as the fastest-growing app in history, have given rise to concerns that terrorist organisations could also take advantage of these tools to develop and expand their operations and spread their ideas.
In September last year, the Global Internet Forum to Counter Terrorism (GIFCT) released a report titled “Considerations on the Impact of Generative Artificial Intelligence on Terrorism and Extremism Online.” The report highlights the potential threats posed by extremist use of generative AI, with fraud and social engineering foremost among them. With its remarkable power to craft highly realistic-sounding text, to the extent of mimicking the writing styles or linguistic idiosyncrasies of particular groups or individuals, ChatGPT could easily be exploited for fraudulent purposes. Combined with other AI marketing tools, it could be programmed to target and tailor content to specific audiences, rendering it a powerful propaganda and recruitment tool for terrorist organisations. US government agencies have been warning of potential terrorist use of generative AI for some time. In October 2023, FBI Director Christopher Wray revealed that the FBI had seen evidence of the use of AI “to amplify the distribution or dissemination of terrorist propaganda.”
On the other hand, in November 2023, Tech Against Terrorism released a report titled “Early Adoption of Generative AI by Terrorists,” which “found little evidence that generative artificial intelligence (AI) services are being systematically exploited by TVEs [terrorist and violent extremist actors]”, suggesting that their “engagement with generative AI is likely to be in its experimental phase.” However, it stressed, “these experiments do indicate an emerging threat of TVE exploitation of generative AI, in the medium to long term, for the purposes of producing, adapting, and disseminating propaganda.”
Meanwhile, AI applications have great potential for use in counterterrorism efforts. A report titled “Countering Terrorism Online Using Artificial Intelligence,” issued by the UN Office of Counter-Terrorism in 2021, outlined several ways. First, with its power to crunch massive quantities of data on people and behaviours, AI can be used predictively, enabling intervention before a terrorist attack occurs. Second, with its ability to monitor online behaviour, it can identify individuals at risk of extremist indoctrination and recruitment, thereby facilitating prevention. Third, it can identify and expose the disinformation spread by terrorist groups and, fourth, erase extremist content from the internet to forestall the spread of hate speech.
However, such potential uses have stirred alarm among human rights advocacy groups. In August 2019, a Chatham House report warned that government agencies’ use of AI technologies would expand the scope of public surveillance, tracking and harvesting people’s private internet use, banking transactions, flight reservations and other personal information in violation of basic civil and human rights. The report therefore stressed the need for sufficient guarantees that governments’ use of AI will not infringe on citizens’ right to privacy and other fundamental rights.
Clearly, AI is a double-edged sword. Its potential benefits in the economic, social, security and military domains are virtually innumerable. However, its applications can also be turned to the malign purposes of extremist organisations and non-state actors. Mitigating the risks requires a coordinated, collaborative effort by AI tech producers, civil society, academics, government agencies, and other stakeholders. Together, they should devise proactive solutions to address the threats while educating the public on the measures they can take to protect themselves against various risks. After all, awareness is crucial when it comes to using any new technology effectively and safely.
The writer is an international terrorism researcher at the Egyptian Centre for Strategic Studies (ECSS)
* A version of this article appears in print in the 4 July, 2024 edition of Al-Ahram Weekly