AI & IT security

Mohamed Abdel-Wahed
Thursday 26 Sep 2019

Egypt must revise its national security safeguards in the light of developments in information technology and artificial intelligence and the evolving nature of today’s cyber-threats, writes Mohamed Abdel-Wahed

Information technology (IT) is in the midst of an unprecedented boom. We see it in the evolution of everything from smartphones to artificial intelligence (AI) applications: mobile phones linked to the Internet, unmanned aircraft systems, automated guided vehicles (AGVs), advanced robotics, “cloud computing”, the Internet of Things (IoT), advanced manufacturing technologies such as 3D printing, and much more.

Without a doubt, the lion’s share of this progress is directed towards military applications that governments vie with one another to acquire and develop in order to build their strength, forge their identities and acquire greater leverage in the intricate weave of international interplay. The surge in the development of unmanned military hardware for use on land, at sea and in the air will alter the nature of warfare and in the process reshape military doctrine.

Information has become more important than conventional weapons because it can sometimes achieve results that conventional weapons cannot. As a consequence, it is a crucial component of national security. Governments and intelligence agencies around the world have come to realise that virtual space and the applications that use it, such as mobile-phone applications, electronic mail systems, Internet sites and cloud-stored databases, have become a new realm for information harvesting and analysis.

Moreover, the breathtaking progress that has taken place in artificial intelligence now offers the computer and cyber-systems necessary to process and analyse vast and complex amounts of data that are too large for traditional data-processing applications. Armed with new technologies in the emergent field of “big data” analysis, governments and intelligence agencies can capture, analyse, process and even shape information in ways that enable them to achieve political, economic or social ends.

Artificial intelligence entails programming computers in ways that resemble human intelligence, imitating human cognitive processes and behaviour, and sometimes even outstripping the human intellect in such areas as computational and data-analysis skills. Precisely because it seeks to simulate human thought and behavioural processes so as to replace humans in a number of areas, AI research is a very large, complex and high-tech field that engages disciplines like neurology and psychology. It covers logic, cognitive processes such as deduction and induction, emotional responses, planning and forecasting, knowledge acquisition and innovation, language and communication, and locomotion and the control of the motion of other things.

Responding to new situations, solving problems, managing crises, forecasting events, answering questions in a particular field of knowledge, making plans, and producing concepts and future visions are among the mental processes that only the human mind can currently perform. However, more and more AI is being designed not only to help perform these processes but also to take them over entirely.

Over the past few years, AI has undergone advances that almost defy belief. A machine programmed with the rules of logic and other parameters and fed with the available data on a certain issue can analyse the information, make connections, chunk information, draw deductions and inferences, and make predictions, often more precisely and faster than the human mind. The world has thus moved from the “data era” to the “knowledge era”. Information on its own means little. What counts is using it to produce knowledge, which is something that can be put to use.

Research centres and companies involved in AI aim to produce expert systems: “rule-based systems” built with AI technologies that function in accordance with specified protocols and parameters. These expert systems are now used to perform tasks that would normally require a human expert, including a multiplicity of diagnostic, problem-solving, planning and decision-making tasks.
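To make the idea concrete, the sketch below shows what a rule-based system looks like in miniature: a handful of condition-action rules applied repeatedly to a set of known facts until no new conclusions emerge (so-called forward chaining). It is written in Python, and the rules and symptoms are invented purely for illustration; no real diagnostic system is this simple.

```python
# A minimal sketch of a forward-chaining, rule-based expert system.
# The rules and facts below are invented for illustration only.

RULES = [
    # (facts required for the rule to fire, conclusion the rule adds)
    ({"fever", "cough"}, "possible respiratory infection"),
    ({"possible respiratory infection", "shortness of breath"},
     "refer to specialist"),
]

def infer(facts):
    """Repeatedly apply the rules until no new conclusion can be drawn."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Observed symptoms yield both an intermediate diagnosis and a final action.
print(infer({"fever", "cough", "shortness of breath"}))
```

A real expert system adds hundreds or thousands of such rules, certainty weights and an explanation facility, but the underlying mechanism of matching conditions against known facts is the same.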

Such systems are now applied in fields such as medicine, geology, psychology, political science, sociology, crisis management and natural-disaster forecasting. They are also used to formulate future scenarios and decision-making alternatives, helping to minimise surprise, risk, time pressure, insufficient information and the other characteristics of a crisis.

AI applications are increasingly present in all facets of human activity. They help societies find solutions to problems and facilitate administrative work. No one can deny the benefits that the “digital economy” has brought: it has helped develop economic management systems, advance sustainable development goals, and increase exchanges of goods and services thanks to the reduced prices made possible by the Internet and Internet commerce.

Thanks to the low costs of “electronic commercial exchanges,” or “e-trade,” small and medium-sized firms can now compete with larger conventional export companies. AI applications can also help governments centralise economic management, strengthen the state’s role, and develop policies based on more accurate and more rapid market analysis and forecasting.


FEARS OF AI: Despite its benefits, some Western critics have claimed that AI technology can foster “digital authoritarianism,” warning that it could reinforce authoritarian and despotic regimes in the Third World by making it easier and cheaper for them to censor, eavesdrop on, monitor the activities of, track the finances of and spy on their people.

Another common concern is that robots taking the place of people at work and elsewhere could increase unemployment. Others counter that AI does not seek to “replace” human beings but rather to enhance human capacities and increase the value of human contributions. 

Despite such criticisms, AI technology is now a significant component of the overall power of the state, and we see it in play in the current rivalry between the US and China in particular, though other major competitors include Russia, Singapore, Israel and South Korea. These and other powers are racing to obtain cutting-edge technologies to use for military and commercial purposes but also for intelligence purposes in the international cyber-wars that we are seeing today. 

These cyber-wars are similar to the former Cold War in that actual military confrontations do not come into play. Also described as “invisible wars,” they are not subject to international rules and conventions and are difficult to control. The US Defence Department classifies cyber-space, and the Internet in particular, as a fourth arena of war after land, sea and air. The British government regards cyber-attacks as one of the four greatest threats to the UK. China, for example, is believed to field a cyber-brigade of more than 100,000 cyber-soldiers. Every year, the US conducts a “cyber-storm” exercise, in which most US military and security agencies take part, to test its preparedness for hostile electronic attacks.

Cyber-weapons, which rely on an array of highly sophisticated electronic equipment and communications systems, are also inexpensive, especially when compared to conventional weapons. For anyone equipped with the relevant know-how, they are easy to use, can be deployed anywhere and are very difficult to detect. They can also be used by anyone from hackers and cyber-thieves to terrorists and security agencies. They come in the form of a large variety of computer viruses and malware that can, for example, access information stored on computers, wipe out hard disks, cause software malfunctions and crashes, disrupt computer networks, gather personal information on people surfing the Internet, and commit identity theft.

AI, especially given its growing capacities for automated learning, has the power to combat cyber-attacks even as the malware evolves and the algorithms it uses mutate. Given the increasing dependency of so many human activities on technology and, above all, on the growing world of the Internet of Things (IoT), IT researchers are now in a perpetual race to produce more and more sophisticated safeguards against cyber-attacks. 
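As an illustration of how automated learning differs from a fixed list of known signatures, the minimal sketch below trains a classifier to flag files by simple numeric traits rather than by exact fingerprints, so that a mutated variant with similar traits can still be caught. The feature set (file size, byte entropy, import count) and the training data are synthetic, chosen purely for illustration; production systems use far richer features and vastly more data.

```python
# A minimal sketch of machine-learning-based malware detection.
# Features and training data are synthetic, for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: [file size in KB, byte entropy, number of imported functions].
# High entropy with few imports often hints at packed or encrypted payloads.
X_train = np.array([
    [120, 4.1, 35],   # benign
    [300, 4.5, 60],   # benign
    [250, 7.8, 4],    # malicious
    [90,  7.5, 2],    # malicious
])
y_train = np.array([0, 0, 1, 1])  # 0 = benign, 1 = malicious

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# A previously unseen file is judged by its traits, not by an exact signature,
# so a mutated variant with similar characteristics can still be flagged.
unseen = np.array([[110, 7.6, 3]])
print("malicious" if model.predict(unseen)[0] == 1 else "benign")
```

Because such a model generalises from traits rather than matching exact byte patterns, retraining it on fresh samples lets the defence evolve alongside the malware, which is precisely the property described above.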

It is important to remain wary of the false sense of security that such safeguards can sometimes give, however. Antivirus programmers need to continually test such safeguards for potential vulnerabilities because of the constantly evolving nature of the threats. 

Consider, for example, a scenario in which a cyber-pirate manages to hack into a company’s malware-detection system and reconfigure it so as to classify a malicious algorithm as benign. The company’s entire store of confidential information about its products, finances, staff and customers could be at severe risk as a result.


CYBER-SECURITY: The new and evolving digital world has long since put paid to conventional approaches to cyber-security, especially those heavily dependent on the human factor, given the sheer enormity of the data flows in today’s world and the mutability and persistence of the innumerable threats.

This is why safeguards need to utilise AI’s rapid, automated learning and high-precision capacities to protect governments and vital institutions from the threat of cyber-attacks. 

Despite the inroads that have been made in AI safeguards against the second generation of threats, further progress is needed to meet the cyber-security needs of today’s world. In the meantime, there are a number of ways in which governments can act now to meet such threats, notably through:

- Campaigning to raise the awareness of the media and the public of the dangers of cyber-warfare;

- Building a national research and development system capable of acquiring and producing the necessary cyber-security technologies locally so as to address potential threats from foreign state or non-state actors; 

- Offering incentives to local companies to develop their cyber-security products and sell them to government organisations and companies; 

- Stepping up intelligence efforts to access confidential information and research on the development of technologies related to cyber-security; 

- Increasing activities to monitor and identify criminal cyber-activities in collaboration with international organisations involved in cyber-defence;

- Signing cooperation protocols with governments that are more advanced in the field in order to exchange information and benefit from their know-how and expertise;

- Working to promote an international non-proliferation agreement for cyber-weapons along the lines of the Nuclear Non-Proliferation Treaty. 

Egypt, like many other countries, must revise its concept of national security in the light of the new and evolving nature of today’s cyber-threats. A modern concept of national security must be consistent with a contemporary understanding of the comprehensive sources of national strength, in which context “cyber-power” is acquiring increasing importance alongside other forms of “soft power.” This concept must simultaneously address problems related to the relationship between modern technologies and national sovereignty and national security. 

Cyber-wars follow completely new laws. A cyber-attack can completely paralyse an adversary’s IT infrastructure and communications networks, inflicting untold military and economic losses. But cyber-space is also an arena for conflicts over values and principles, in which information on population groups is harvested and processed in order to devise ways, also using IT technologies and platforms, to influence those populations and steer them in certain directions. Such manipulation can include eroding a population’s confidence in itself and its government, among other pernicious forms of brainwashing.

However, despite their drawbacks, it is important to bear in mind the bright side of the new technologies, which also have the potential to solve major world problems, improve standards of living and make human life better for all. For such reasons, as well as for others, Egypt should do its best to catch up in the IT and AI research and development race, and it would be very wise to allocate a respectable budget towards doing so.

*The writer is an expert on national security affairs.

**A version of this article appears in print in the 26 September, 2019 edition of Al-Ahram Weekly.
