INTERVIEW: Renowned expert Jürgen Schmidhuber dismisses fears of future AI threats

Mohamed El-Kazaz, Tuesday 26 Sep 2023

Professor of AI Jürgen Schmidhuber speaks about the benefits of the rise of AI, disputing widespread fears about its future threats.



Professor Schmidhuber earned his doctorate in computer science from the Technical University of Munich (TUM). He is a co-founder and the chief scientist of the AI company NNAISENSE, scientific director at the Swiss AI Lab (IDSIA), and professor of artificial intelligence (AI) at the University of Lugano.

Known as the ‘father of AI’, Schmidhuber is also a recipient of numerous awards, the author of over 350 peer-reviewed papers, a frequent keynote speaker, and an adviser to various governments on AI strategies.

His lab’s Deep Learning Neural Networks have revolutionized machine learning and AI. By the mid-2010s, they were implemented on over 3 billion devices and used billions of times per day in the products of the world’s most valuable public companies: e.g., greatly improved speech recognition on all Android phones, machine translation through Google Translate and Facebook (over 4 billion translations per day), Apple’s Siri and Quicktype on all iPhones, the answers of Amazon’s Alexa, and numerous other applications.

In 2011, his team was the first to win official computer vision contests through deep neural nets with superhuman performance. In 2012, they had the first deep neural network to win a medical imaging contest (on cancer detection), attracting enormous interest from the industry.

His research group also established the fields of artificial curiosity through generative adversarial neural networks, linear transformers and networks that learn to program other networks (since 1991), and mathematically rigorous universal AI and recursive self-improvement in meta-learning machines that learn to learn (since 1987).

He is currently working as the director of the Artificial Intelligence Initiative and a professor of the Computer Science program in the Computer, Electrical, and Mathematical Sciences and Engineering (CEMSE) Division at King Abdullah University of Science and Technology (KAUST). 

Mohamed El-Kazaz​: Everyone is warning of the dangers of the rise of AI, the public is increasingly worried, and ordinary people fearfully anticipate the imminent demise of their jobs and even of all humanity. But you think the rise of AI should not be feared. Why?

Jürgen Schmidhuber: Because 95 percent of AI research is really about making human lives longer, healthier, and easier. There is tremendous commercial pressure to create “good AI,” because companies want to sell you their AI products, and you’ll buy only what you think is good for you. Our own motto has been "AI∀" or "AI For All.” Humanity is already greatly profiting from the AI we have developed over the decades.

MK: But won't AI actually cause jobs to disappear? In this case, what can we do?

JS: We'll do what we always did when jobs disappeared: create new jobs. 200 years ago, most people in the West worked in agriculture. Today, only 1-2 percent or so do. But on average, everybody is much wealthier than back then. Similarly, decades ago, industrial robots replaced jobs in car factories. Nevertheless, in countries with many robots per capita (e.g., Japan, Germany, and Korea), unemployment rates are low. Why? Because lots of new jobs were invented.

However, my old statement from the 1980s is still valid: it’s easy to predict which jobs will disappear, but hard to predict which new jobs will be created. 40 years ago, before the WWW was developed at CERN in Switzerland, who would have predicted all those people making money as influencers and YouTube video bloggers?

MK: Why do you say that AI does not introduce a new quality of existential threat?

JS: Maybe 5 percent of AI research is about AI weapons, e.g., unmanned drones. But we should be much more worried about the 60-year-old technology of rockets with hydrogen bombs that can wipe out civilization within 2 hours, without any AI. That’s currently a much bigger existential threat.

MK: But the publicity around AI weapons has made people fear for their lives. Have people ever expressed such fears to you?

JS: Most people I know aren't afraid of AI at all. Sure, everybody knows that all technology can be used for both good and bad. But why should you be more afraid of current AI weapons than of standard guns? A criminal may use an AI drone with face recognition to attack a victim without harming bystanders, instead of shooting the victim directly with a traditional gun.

In both cases, existing laws dictate that he'll go to jail if caught. Let me repeat: AI-driven drones are much less worrisome than a 60-year-old H-bomb that can obliterate a large city with 10 million inhabitants within a few seconds, without any AI.

MK: Do you think there is still a fierce debate in AI circles about whether the technology has made genuine progress, or whether it is mostly hype?

JS: Nobody thinks it's hype anymore. Everybody agrees that the ongoing acceleration of computer hardware has brought genuine progress, although the basic AI techniques were already invented in the previous millennium.

MK: What are the key points that non-experts need to understand about how AI works?

JS: Modern AI is based on artificial neural networks (NNs) inspired by the human brain. The brain has on the order of 100 billion neurons, each connected to 10,000 other neurons on average. Some are input neurons that feed the rest with data (sound, vision, tactile, pain, and hunger). Others are output neurons that control muscles.

Most neurons are hidden in between, where thinking takes place. Your brain apparently learns by changing the strengths or weights of the connections, which determine how strongly neurons influence each other, and which seem to encode all your lifelong experience. The same holds for our NNs, which learn to recognize speech, handwriting, or video, to drive cars, etc., better than previous methods.
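The idea described above, that learning means adjusting connection weights, can be illustrated with a deliberately tiny toy example (hypothetical, not code from Schmidhuber's lab, and omitting the hidden layers of real deep networks): a single artificial neuron that learns the logical AND function by strengthening or weakening its input connections after each mistake.

```python
import numpy as np

# Toy sketch of the brain analogy: input "neurons" feed data, one output
# neuron fires when the weighted sum of its inputs exceeds a threshold,
# and learning = changing the connection weights after each error.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = [0.0, 0.0, 0.0, 1.0]  # target: logical AND of the two inputs

w = np.zeros(2)   # connection strengths ("weights"), initially zero
b = 0.0           # firing threshold, represented as a bias term

for epoch in range(20):          # classic perceptron learning rule
    for xi, target in zip(X, y):
        fired = 1.0 if xi @ w + b > 0 else 0.0
        # Strengthen or weaken connections in proportion to the error.
        w += (target - fired) * xi
        b += (target - fired)

predictions = [1.0 if xi @ w + b > 0 else 0.0 for xi in X]
print(predictions)  # → [0.0, 0.0, 0.0, 1.0]: the neuron has learned AND
```

All of the neuron's "knowledge" ends up in `w` and `b`, mirroring the claim that experience is encoded in connection strengths; deep networks apply the same principle across many layers of hidden neurons.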

MK: Are you optimistic that we can create a great future with AI?

JS: Yes.

MK: How does that happen?

JS: It has already started to happen. In 2012, when compute (the computational resources needed to train and run AI systems) was 100 times more expensive than today, my team had the first deep NN to win a medical imaging contest. This was about cancer detection. Today the same technique is used for thousands of medical applications. Also, by the mid-2010s, our LSTM NNs were on billions of smartphones (probably yours, too), for greatly improved speech recognition and automatic translation, breaking down old communication barriers between people and nations.

Today, our NN techniques help to power ChatGPT and similar models that facilitate human lives. And we are seeing only the beginning: every 5 years, compute gets 10 times cheaper, which means that in 30 years, people will smile at today's applications, which will seem primitive compared to what will be available then.
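The arithmetic behind that projection is simple compound growth: a 10x cost drop every 5 years compounds to 10^(30/5) = 10^6 over 30 years, i.e., a million-fold cheapening of compute. A quick check:

```python
# Compounding the stated trend: compute gets 10x cheaper every 5 years.
years = 30
factor_per_period = 10
periods = years // 5                  # six 5-year periods in 30 years
cheaper = factor_per_period ** periods
print(cheaper)  # → 1000000: compute a million times cheaper in 30 years
```

The same rule is roughly consistent with the earlier remark that compute in 2012 was about 100 times more expensive than today: 10^(10/5) = 100 over a decade.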

MK: Regarding AI governance, do we need to legalize a framework for ensuring AI and machine learning technologies are researched and developed with the goal of helping humanity navigate the adoption and use of these systems in ethical and responsible ways?

JS: This may sound good at first glance. But what exactly is meant by "humanity"? As I have pointed out many times in the past, there is no "we" that everyone can identify with.

Ask 10 different people, and you will hear 10 different opinions about what is "good." Some of those opinions will be completely incompatible with each other. For example, different governments have different opinions about what is good for them and for others. Nation A will say that if we don't do this type of AI research, then nation B will, perhaps secretly, and gain an advantage over us. The same is true for nations C, D, and E.
