A war on awareness: An Egyptian study maps the frontlines of cognitive warfare

Ahram Online, Thursday 26 Mar 2026

A new study by Justice Adel Maged, published by the Information and Decision Support Center of the Egyptian Cabinet on 18 March 2026, argues that artificial intelligence has shifted the center of conflict from territory to perception, making human awareness itself a critical domain of national security.


Over the past two decades, the nature of conflict has undergone a profound transformation. Wars are no longer defined solely by geography, military power, or access to resources. Instead, they increasingly unfold within a less visible but more consequential domain: the human mind. At the center of this transformation lies artificial intelligence, whose rapid expansion has redefined not only how information circulates but also how it is perceived, interpreted, and internalized.

Justice Adel Maged’s study, released as part of an expert series by the Egyptian Cabinet’s Information and Decision Support Center, enters this evolving debate with notable depth and urgency. Positioned at the intersection of law, security, and technology, the study offers one of the earliest comprehensive Arab analyses of what has come to be known as “cognitive warfare”—a form of conflict that targets perception itself.

Drawing on a multidisciplinary framework that includes cognitive psychology, neuroscience, sociology, and artificial intelligence, the study advances a central argument: in the digital age, safeguarding awareness is no longer a cultural or educational concern alone but a matter of national security. As Maged puts it, “in the age of artificial intelligence, rumors and disinformation may become more dangerous than ammunition itself.”

This premise anchors the study’s broader exploration of how digital technologies, particularly algorithm-driven systems, are reshaping both individual and collective awareness. It is a shift that demands new conceptual tools—and new policy responses.

At the core of the study is a careful distinction between “the mind” and “awareness.” The mind, as defined here, encompasses processes of reasoning, analysis, and inference. Awareness, by contrast, refers to the individual’s perception of self and environment. The relationship between the two is dynamic: awareness filters what is noticed, while the mind processes that information into beliefs and judgments.

Artificial intelligence, the study argues, does not directly intervene in rational thought. Instead, it operates through what Maged describes as an “emotional gateway.” By directing attention, shaping perception, and activating latent biases, AI systems gradually influence how individuals interpret reality. The resulting beliefs often appear self-generated, even as they emerge from algorithmically curated environments.

This process unfolds in identifiable stages. It begins with the collection and analysis of behavioral and emotional data. From there, digital systems apply techniques such as cognitive priming and emotional stimulation, subtly preparing individuals to receive information in specific ways. Over time, repeated exposure to tailored content reinforces certain interpretations, producing what psychologists call the “illusory truth effect”—the tendency to accept repeated information as true.

What is striking in this model is that reality itself remains unchanged. What shifts is the individual’s experience of it. In this sense, cognitive warfare does not distort facts as much as it reorganizes the frameworks through which facts are understood.

The implications extend beyond individuals to what the study terms “collective awareness”—the aggregated perceptions of society as a whole. When large groups are exposed to similarly structured information environments, shared interpretations emerge, often amplifying polarization and eroding consensus.

The study’s second major contribution lies in its detailed mapping of the technological tools involved in this process. Artificial intelligence systems are no longer passive conduits of information; they actively analyze behavior, predict preferences, and deliver highly personalized content. Among the technologies highlighted are natural language processing, data mining, deep learning algorithms, generative AI systems, deepfake technologies, and psychographic profiling.

These tools, when misused, can facilitate the spread of disinformation, intensify social divisions, and undermine trust in institutions. Yet the study is careful not to frame artificial intelligence itself as inherently dangerous. The risk, it argues, lies in how these technologies are deployed—particularly when used to manipulate cognitive environments in systematic ways.

A key insight here concerns the shift from overt to covert forms of influence. Traditional propaganda relied on visible messaging. Today’s digital systems operate more subtly, embedding influence within the architecture of platforms themselves. Through what is known as manipulative user interface design, users are guided through carefully structured digital experiences that shape attention and behavior without explicit awareness.

This influence often operates below the threshold of conscious perception. Content is sequenced, framed, and repeated in ways that gradually steer emotional responses and preferences. Over time, these patterns become internalized, shaping how individuals interpret new information.

In this sense, artificial intelligence does more than transmit messages—it constructs the environment in which meaning is formed. This capacity to influence cognition at a pre-conscious level is what places these practices within the broader framework of cognitive warfare.

The study’s third section shifts focus from analysis to implications, particularly within the Egyptian context. Here, the concept of digital sovereignty emerges as central. The ability to control and secure a nation’s information environment is increasingly tied to its capacity to protect collective awareness from external manipulation.

Among the risks identified are the spread of narratives that conflict with national cultural values, the amplification of social polarization, and the empowerment of extremist ideologies. These dynamics, the study suggests, can erode social cohesion and weaken trust in public institutions.

Cognitive warfare, as defined in the study, represents a departure from traditional forms of conflict. It does not aim to occupy territory or destroy infrastructure. Instead, it seeks to shape perception, influence decision-making, and ultimately alter the social fabric from within. Drawing on NATO research while adapting it to national contexts, Maged frames this form of warfare as one of the defining challenges of the contemporary security landscape.

Yet the study does not adopt a purely alarmist tone. It emphasizes that the response to these challenges should not involve restricting technological development. Rather, it calls for balanced governance frameworks that enable societies to harness the benefits of artificial intelligence while mitigating its risks.

Central to this approach is the concept of “cognitive resilience,” the capacity of individuals and societies to critically engage with information and resist manipulation. Building such resilience requires a combination of policy measures and cultural shifts.

Among the recommendations outlined are the expansion of digital literacy programs, the development of regulatory frameworks governing AI use, the strengthening of digital sovereignty, and the promotion of interdisciplinary research into the relationship between technology and cognition. The study also highlights national initiatives such as the Ministry of Communications’ “wa3i.net” platform, which aims to enhance digital awareness, particularly among younger generations.

These measures, while varied, share a common objective: to ensure that citizens remain active interpreters of information rather than passive recipients.

In its concluding sections, the study returns to its broader intellectual contribution. By integrating insights from multiple disciplines, it offers a nuanced framework for understanding how technological transformations intersect with cognitive processes. It moves beyond treating artificial intelligence as a purely technical phenomenon, situating it instead within the wider context of societal stability and national security.

This approach is particularly valuable in the Arab academic landscape, where systematic studies of cognitive warfare remain limited. By addressing both conceptual and practical dimensions, the study lays the groundwork for future research and policy development.

Equally important is its emphasis on methodological clarity and conceptual precision. In a field often marked by vague terminology and speculative claims, Maged’s work provides a structured vocabulary for analyzing the interplay between technology and awareness.

Ultimately, the study underscores a central paradox of the digital age. The same technologies that expand access to information also create new pathways for its manipulation. The challenge, therefore, is not to resist technological change but to shape it in ways that preserve the integrity of human awareness.

In this sense, the study serves as both a warning and a roadmap. It highlights the vulnerabilities introduced by artificial intelligence while outlining the steps needed to address them. For policymakers, researchers, and the broader public, it offers a timely reminder that the future of security may depend as much on protecting minds as on defending borders.

As cognitive warfare continues to evolve, the questions raised by this study are likely to become more pressing. How can societies maintain a shared sense of reality in increasingly fragmented information environments? What safeguards are needed to ensure that technological innovation does not come at the expense of cognitive autonomy? And perhaps most importantly, how can awareness itself be protected in an age when it has become the primary target of conflict?

The study does not claim to provide definitive answers. But it succeeds in framing the problem with clarity and urgency, an essential first step in confronting one of the most complex challenges of our time.
