Artificial intelligence (AI) has become one of the most important and controversial topics in the world today due to the rapid developments in the technology, especially tools such as ChatGPT and Google’s Bard that make it easier for people to access information.
There has also been growing use of artificially generated human voices and images, such as through holograms and other technologies. The emergence of electronic chips that their promoters say can be implanted in the body, removing the need to carry devices or personal data, has concerned many, as they are seen as opening the door to possible electronic control and thus the tyranny of the machine over human existence.
The US McKinsey Global Institute issued a report in July entitled “Generative AI and the Future of Work in America” that brings into focus many contemporary concerns about developments in AI. According to the report, 30 per cent of working hours in the US economy could be exposed to automation, i.e. the use of software and computers, putting a number of jobs, among them customer-service and food-service positions, at risk of decline.
The report says that US federal investment to address climate and infrastructure concerns will also affect employment levels in the US, as it will lead to a decline in the oil and gas and automobile-manufacturing sectors.
Another report, issued by the US Pew Research Centre last month and entitled “Which US Workers Are More Exposed to AI on their Jobs?”, points to the penetration of AI into many sectors in the US, notably through the emergence of applications like ChatGPT. The report says that 19 per cent of US workers were exposed to AI tools in their jobs in 2022, using them in their work or to carry out important activities.
A report published by Reuters last month described an AI-supported social robot developed in Geneva that could work in nursing homes for the sick and elderly. It performs human-like gestures and expressions but differs from human workers in that it is available around the clock and does not require payment. Clearly, such a robot threatens the human presence in this type of job.
Even the cinema has not escaped the impact of AI, as is evident in the decision of actors and screenwriters in Hollywood to go on strike, due among other reasons to the threat AI poses to their jobs. The situation has been made worse by the announcement by Netflix, the media-streaming company, of a job for an AI expert with a salary of $900,000, far more than the amount earned by human scriptwriters.
According to Reuters, the US Walt Disney Company has formed a working group to study how to apply AI across all of its entertainment departments, as it looks to develop AI tools and form partnerships with start-ups specialising in the field. The company has also advertised 11 positions for candidates with experience in AI and machine learning.
These jobs span every department of the company, including its studios and engineering groups, indicating that the US entertainment industry may be about to lay off large numbers of human employees and raising fears among workers of being dispensed with and of the collapse of their professional futures.
In an interview with the site Audacy, UK singer Ed Sheeran expressed his rejection of the penetration of AI into the music industry. He noted that Hollywood films dealing with AI have denounced the replacement of human beings by AI tools and even the killing of people by robots.
This contradicts what is happening now, Sheeran said, where AI is being used more and more extensively. The main goal of society is for people to have jobs, he added, worrying that in future most people could be unemployed as they will have been replaced by AI.
AI RESEARCH: There is also an increasing reliance on AI in the preparation of scientific research and academic work, raising the question of whether it will be possible to dispense with human teachers and human researchers in future.
Human beings may suffer from bias or weakness, whereas AI does not get tired and can continuously update its skills, making it an attractive substitute.
According to the article “Artificial-Intelligence Search Engines Wrangle Academic Literature” published in the journal Nature in August, ChatGPT can now automate many steps of academic writing, as it is among a new generation of search engines whose role is not limited to searching for keywords and finding links to the scientific literature but extends to producing it autonomously.
Consensus, for example, works to provide answers backed by scientific research, while Semantic Scholar ranks bibliographies, suggests new papers, and generates research abstracts.
According to the US news channel CNN, US billionaire Elon Musk has warned against the dangers of AI for general use among consumers, notably due to the emergence of advanced products from Google and Microsoft. He has joined a group of technology pioneers in signing an open letter calling for a six-month halt to the development of AI applications so that safeguards can be put in place to prevent the technology from getting out of control.
Musk said that he supports government intervention to regulate AI, but he also hinted that it may already be too late to tighten control over the field. Even so, despite his stated opposition, Musk’s own businesses make use of AI. Tesla, for example, a company owned by Musk, relies on AI and organises an annual AI Day to promote its work.
Musk is also a founding member of OpenAI, the company that released ChatGPT, and he has hinted that he plans to use AI to detect the manipulation of public opinion on Twitter (now called X).
According to CNN, Musk announced in July the creation of a company specialising in AI called xAI that will be a competitor to ChatGPT. Dozens of employees have been allocated to the new company, and a website has been launched. According to the site, Musk will lead the company, linking it to Twitter and Tesla with the goal of “understanding the nature of the universe”.
The announcement follows an earlier statement by Musk that he plans to launch a project called TruthGPT, through which he seeks to use AI to give only truthful results, especially after he criticised ChatGPT for lacking safeguards against sexist responses to user enquiries. He appears to be trying to take corrective steps in the field of AI, in line with this vision of its future uses.
Reuters has reported that in order to keep pace with developments in AI, Neuralink, a company also owned by Musk, has developed a chip that can be implanted into the brain to help paralysed people regain the ability to walk and blind people regain their sight. The chip works by processing and transmitting nerve signals from the brain to computers and mobile phones, allowing patients to control these devices with their minds.
The company believes that its invention will be able to restore the nervous activity of the body after injury, enabling those with spinal-cord injuries to move their limbs again, for example. It also aspires to treat neurological conditions such as Alzheimer’s disease and dementia. Earlier, however, the company faced a US federal investigation into possible animal-welfare violations, following complaints from its employees that certain animal tests had been rushed, causing unnecessary deaths.
What is particularly frightening about such steps is that they are not limited to use in patients but extend to healthy people as well. This was highlighted by the US newspaper the Washington Post in March, when it reported that chips are being developed for implantation into the brains of healthy people to facilitate their communication with computers.
A number of companies are competing in the manufacture and development of such brain chips.
Despite the complaints directed at Neuralink regarding its treatment of animals, the company has obtained approval to conduct its first clinical trial implanting an experimental brain chip in humans, raising concerns among a number of specialists.
According to the UK newspaper The Guardian, Laura Cabrera, a neuroethicist at Penn State University in the US, said that she was caught off guard by the decision to develop brain chips, given Musk’s erratic leadership of Twitter. This raises questions about the company’s ability to responsibly oversee the development of a medical device able to read brain signals, she said.
FEARS AND CONCERNS: Dolly Sarraf, a professor of sociology at the Lebanese University in Beirut, said that AI is a double-edged sword: in the wrong hands its effects could be devastating, but used for the benefit of humanity it could also do much good.
The fate of humanity depends on who creates and uses AI and on the goals they choose, she said.
In an interview with Al-Ahram Weekly, Sarraf said that one of the advantages of AI is its ability to improve productivity and reduce errors in industrial and service processes, as it can analyse data, identify problems that affect productivity, and provide immediate solutions, in addition to improving health and medical care through the early diagnosis of diseases.
AI can improve education and training by customising educational and training curricula according to the needs of students and trainees, she said.
However, the negative aspects of AI include job losses, possible privacy violations, deepfakes, and social and economic inequality, Sarraf said. She stressed that the risks of AI could even extend to the extinction of humanity, citing a statement issued this year by the US Centre for AI Safety (CAIS) and signed by 350 researchers and executives, which said that the risks of AI could be on a par with those of epidemics and nuclear war.
Sarraf said that two main factors could help to avoid the future dangers of AI, however. The first is greater awareness: as long as this is not among the priorities of government and private institutions, such as those in education and the media, we could remain oblivious to the real dangers of the technology. The second is the need for legislation to regulate the use of AI, though this is difficult, since even with current legislation it has been impossible to eradicate cybercrime, for example.
Regarding the position of the Middle East, Sarraf said that the countries of the region were still “recipients” of technology and did not have a voice among those creating AI and other advanced technologies. What is required is greater awareness and the formation of initiatives to bring this about, since though it will doubtless prove impossible to stop the penetration of AI into our societies, we can at least halt some of its risks, she noted.
Samar Khamlichi, a professor of political science at the Institute of African, Euro-Mediterranean, and Ibero-American Studies at the Mohammed V University in Rabat, said that we are currently living through a revolution in technology.
Stressing that the rapid development of AI has both advantages and disadvantages, she said that the ability to automate complex tasks and improve operational efficiency in areas such as education, marketing, and cybersecurity was undoubtedly an advantage. AI can drive innovation by encouraging innovators to create new applications and technologies. In health and medicine, it can be used in medical diagnosis, drug discovery, and personalised care, and in cybersecurity it may contribute to detecting security threats.
However, AI can also lead to a decrease in traditional jobs that do not depend on creativity, she said. In scientific research, AI systems can reflect biases, leading to unfair results. There are also concerns about privacy, especially in the security and political fields.
As for the dangers to human existence from AI, Khamlichi said that these were more theoretical concerns than anything else at present. The possibility of an existential threat is subject to interpretation and research. However, collaboration between AI researchers, policymakers, and ethicists is crucial, she said, as this would help to put in place transparency and accountability measures in mitigating AI-related risks.
Khamlichi said that AI could affect international relations by creating economic and strategic differences between countries. Issues around cyber warfare, autonomous weapons, and AI-based espionage could change the dynamics of conflict.
Regarding the Middle Eastern countries, she said that the Middle East, like any other region, could address the present rapid developments in AI by investing in education, research, and infrastructure. It could also develop policies promoting the responsible development of AI, addressing ethical concerns, and ensuring that the benefits of AI are real and widespread.
The writer is a researcher in political science and managing editor of the Middle Eastern Visions Platform of the European Centre for Middle East Studies in Germany.
* A version of this article appears in print in the 24 August, 2023 edition of Al-Ahram Weekly