AI for Good: Can the tech giant be tamed?

Yasmine Osama Farag, Wednesday 19 Jul 2023

Like any new technology, AI has emerged with advanced capabilities that amaze humans. However, along with its incredible potential, concerns have also arisen about its possible negative impact on humanity and human civilization.

File Photo - The Engineered Arts Ameca humanoid robot with artificial intelligence as it is demonstrated during an event. AFP


UN Secretary-General Antonio Guterres recently addressed the Security Council, emphasizing the potential of AI to accelerate human development while also cautioning against the malicious use of this revolutionary new technology.

“Look no further than social media. Tools and platforms that were designed to enhance human connection are now used to undermine elections, spread conspiracy theories, and incite hatred and violence,” Guterres said.  “Malfunctioning AI systems are another huge area of concern. And the interaction between AI and nuclear weapons, biotechnology, neurotechnology, and robotics, is deeply alarming.”

In response to the rapid development of AI and the need to establish clear guidelines for its use, the UN recently organized a conference to create a definitive blueprint for managing this technology and setting appropriate boundaries.

The “AI for Good Global Summit”, held in Geneva, brought together around 3,000 experts from companies like Microsoft and Amazon, as well as from universities and international organizations, to try to sculpt frameworks for handling AI.

Attendees were joined by dozens of robots, including several humanoids like Ai-Da, the first ultra-realistic robot artist; Ameca, the world’s most advanced life-like robot; the humanoid rock singer Desdemona; and Grace, the most advanced healthcare robot.

The robots affirmed in their first-ever press conference on the sidelines of the summit that they work alongside humans to assist them and have no intention of overthrowing or replacing them. However, they suggested they could be more efficient government leaders.

Warning of risks

While some AI proponents hail the technology for how it can transform society, including work, healthcare and creative pursuits, others are worried by its potential to exacerbate social inequalities and introduce biases on race, gender, politics, culture, religion or wealth.

“AI is a powerful and transformative technology that can bring many benefits to humanity and society, but it also poses some challenges and dangers that need to be addressed,” Mohamed Sami El-Tahawy, digital adviser at Microsoft Egypt, said.

El-Tahawy explained that AI could violate some people’s privacy, rights, or values. Its systems may collect, store, or use personal data without consent or transparency, or discriminate against or harm certain people based on their characteristics or preferences. Moreover, AI may introduce errors, biases, and breaches that affect the reliability and safety of the systems or their outcomes. For example, AI systems may malfunction or be hacked or manipulated by malicious actors.

Sam Altman, CEO of OpenAI, the company that developed the controversial consumer-facing AI application ChatGPT, has warned that the technology comes with real dangers as it reshapes society.

“I’m particularly worried that these models could be used for large-scale disinformation,” Altman told ABC News. “Now that they’re getting better at writing computer code, they could be used for offensive cyber-attacks.” But despite the dangers, he said, it could also be “the greatest technology humanity has yet developed.”

How does the world react?

In June, EU lawmakers pushed the bloc closer to passing one of the world's first laws regulating AI systems like OpenAI's ChatGPT chatbot. The law seeks to protect against so-called social scoring systems that judge people based on their behaviour or appearance, and against applications that engage in subliminal manipulation of vulnerable people, including children.

In the US, senators introduced two separate bipartisan AI bills amid growing interest in addressing technology issues.

The US House Science Committee hosted a handful of AI companies to ask questions about the technology and the various risks and benefits it poses, and some House representatives proposed a National AI Commission to manage AI policy.

El-Tahawy suggests several practices that can help mitigate the risks associated with AI, including:

• Developing and applying ethical, legal, and social principles, laws, and standards that ensure AI's responsible, safe, transparent, and fair use. These principles should reflect the values and interests of all stakeholders and respect all people's human dignity and rights.

• Promoting education, training, and awareness about AI and its impacts on society and individuals.

• Encouraging cooperation and dialogue among stakeholders and AI experts, such as researchers, developers, policymakers, citizens, and non-governmental organizations. This involves fostering a culture of trust, openness, and accountability among all parties involved in AI, as well as engaging in multidisciplinary research and innovation that addresses AI's social and ethical challenges.
