INTERVIEW: 'Fear is a privilege': How the Global South is building the digital future - Payal Arora

Heba Abdelsattar in Sharjah, Wednesday 12 Nov 2025

Digital anthropologist Dr Payal Arora, Professor of Inclusive AI Cultures at Utrecht University, speaks to Ahram Online about how the Global South’s optimism toward technology and AI contrasts with Western digital pessimism, and why she believes hope, not fear, will shape the future of innovation.


In the gleaming conference rooms of Silicon Valley and Brussels, AI conversations hum with existential dread—job displacement, algorithmic tyranny, the death of truth.

Meanwhile, in Jakarta's internet cafés, rural Kenya's smartphone screens, and Dubai's migrant worker dormitories, an entirely different digital story unfolds, defined not by fear but by fierce, pragmatic hope.

Dr Payal Arora, shuttling between European policy boards and fieldwork across India, Brazil, and the Philippines, recognized this dangerous imbalance: the global digital conversation had become dominated by those with the most to lose and the least to gain from transformation.

Her groundbreaking books, The Next Billion Users: Digital Life Beyond the West and From Pessimism to Promise: Lessons from the Global South on Designing Inclusive Tech, arrived as provocation and challenge—what if technology’s future is being written not in Palo Alto, but by Venezuelan refugees building online businesses, or lonely UAE workers finding AI companions to “download their day” to?

As a digital anthropologist, Arora studies what computer scientists ignore: not how technology works, but what it means to those who use it, from Bangladeshi sex workers to Filipino single mothers creating AI films while their children sleep, to Ecuadorian Indigenous artists retraining datasets against colonial imagery.

Now, as generative AI reshapes creativity, labour, and connection, her insights feel urgently necessary. In this wide-ranging conversation, she dismantles comfortable assumptions about digital divides and makes a radical case: optimism isn’t naïve, it’s strategic.

While the West debates what might be lost, billions are building what comes next. This is their story, and increasingly, it’s the one that matters most.

Dr Payal Arora is Professor of Inclusive AI Cultures at Utrecht University and Co-Founder of FemLab.

A digital anthropologist, author, and consultant, she has advised organizations like Google and Spotify, delivered 350+ talks across 67 countries, and holds degrees from Harvard and Columbia. Indian, Irish, and American, she lives in Amsterdam.

Ahram Online (AO): What inspired you to write your recent books, The Next Billion Users and From Pessimism to Promise?

Dr Payal Arora (PA): The inspiration came from a pattern I kept observing in international meetings and policy boards, especially within Western institutions. The tone was always the same: deep anxiety about technology.

Every new wave of innovation is met with predictions of disruption, job loss, and existential crisis. Yet when you step outside that bubble and talk to people in India, Kenya, or Brazil, you find an entirely different mood: hopeful, creative, future-oriented.

I realized we were missing a huge part of the global story. The West was becoming increasingly pessimistic about all things digital, while the Global South was embracing technology with optimism. My books try to make sense of that tension.

AO: You identify as a digital anthropologist. Could you tell us more about your work and what that means in practice?

PA: Of course. I’m not a computer scientist; I study people.

My work explores how humans use digital tools in everyday life, what meanings they attach to them, how they adapt them to local realities. I’ve spent years in India’s tier-two and tier-three towns, talking to young users; I’ve worked with refugees in Brazil and Namibia.

Technology doesn’t succeed because it’s well-designed; it succeeds when people make it meaningful. Even the most sophisticated system fails if users can’t see themselves in it. So anthropology helps us bridge that gap; it brings humanity back into technology.

AO: Why do you think this polarity between optimism and pessimism has emerged?

PA: It’s partly about mindset and lived experience.

In the West, there’s fatigue: economic, political, environmental. People feel they have much to lose. In contrast, much of the Global South has experienced visible transformation through technology: affordable data, access to education, entrepreneurship.

When your baseline is struggle, progress feels real and empowering.

For instance, data used to be prohibitively expensive in India. Today it’s among the cheapest in the world. That change has brought millions online, especially young, mobile-first users who are not passive consumers but active creators. For them, the digital world is not a threat, it’s an opportunity.

AO: You've mentioned that Western societies have grown more anxious about AI, while the rest of the world remains hopeful. Why do you think that is?

PA: Because fear is a privilege. It's a luxury to be pessimistic. In many Western societies, people are preoccupied with what might be lost. In the Global South, people are still focused on what might be gained.

When I talk to young people in China, India, or Kenya, I see energy, not despair. They are not debating whether AI will "replace" them—they are already figuring out how to use it for self-learning, for art, for business. This grounded, pragmatic optimism fuels innovation. If you look at the statistics, Americans and Europeans are much more sceptical and hesitant about using AI, and they're older populations. In contrast, youth in places like the UAE are just playing with it constantly. That experimentation is where real innovation happens, in the application phase, not just in building the technology.

AO: You've argued that we need to "burst the pessimism bubble." What does that mean in practical terms?

PA: It means rethinking what we value. Instead of focusing only on risks and harms, we need to look at possibilities. Fear-based policy leads to containment—it kills experimentation. You can't play with something you fear.

We need rational optimism, an optimism grounded in awareness of risks, but also in the courage to create. Because creativity thrives only in hope, not despair.

AO: Your books present many stories from the Global South. Could you share a few that stood out to you?

PA: Absolutely. One that moved me deeply was from Venezuela’s refugee communities on the Brazilian border. Instead of seeing themselves as victims, these individuals were building small online businesses, learning to game algorithms, and using AI tools to create content despite immense barriers.

In the Philippines, single mothers have started making AI-generated short films while their children sleep—turning creative dreams into reality without the need for expensive studios. And in Ecuador, Indigenous artists are using AI to challenge stereotypes of Indigenous identity by “training” datasets with new cultural imagery.

These stories show that AI is not just about automation, it’s about imagination.

AO: In one of your lectures, you spoke about how young people's first motivation online is often romance. Why do you consider that important?

PA: Because it’s deeply human! For many young people, especially in conservative societies, the internet is the only private space they have. It’s where they learn about love, intimacy, and self-expression.

When I interviewed youth across India, Nigeria, and Brazil, many told me their first reason for going online was to experience romance. And through that, they gained digital literacy, learning translation tools, communication, and confidence. We must not dismiss these experiences as trivial; they are gateways to empowerment.

AO: You often contrast the Global South's digital creativity with the West's growing pessimism toward technology. Given that most generative AI models are trained primarily on Western data, do you think we are witnessing a new form of algorithmic colonialism?

PA: The idea of algorithmic colonialism is certainly compelling, and I understand why it resonates. Big Tech—particularly in Silicon Valley—has amassed extraordinary wealth and power. We're talking about the perversity of trillionaires like Bezos or Elon Musk—and simultaneously, 30,000 jobs being cut at Amazon. These corporations rely heavily on data from the Global South because AI is very hungry for data, and as I mentioned, if user data is mainly in the Global South, they can't not use it.

Take Sam Altman's Worldcoin as an example: before OpenAI, he had this initiative, supposedly a nonprofit meant to give money to everybody, but in return you had to pay with your iris scans and biometrics. He went through Malaysia, the Philippines, and 26 other countries with this orb, collecting data. Then he declared bankruptcy.

Where did that data go? These are deeply unethical, problematic issues.

However, calling this colonialism outright oversimplifies the issue. If we look historically, colonialism involved complicity: local elites and foreign powers working together to exploit populations.

Think of the Maharajas in India or tribal clansmen in Africa: they worked with the East India Company and the West India Company to oppress and extract from citizens.

It wasn't just "the West versus the Rest."

Similarly, today, there's often a partnership between multinational corporations and local governments or elites that perpetuates inequality.

Moreover, the power balance is shifting.

Competing models are emerging from China and elsewhere, often at a fraction of the cost. Within months of ChatGPT's launch, China changed the game, showing you don't need infinite amounts of data. And unlike OpenAI, many of these competitors have gone open source, challenging closed systems. This suggests OpenAI's business model may not survive against the others.

That gives me hope, we are witnessing progress and a diversification of power.

AO: Given this uneven playing field, how can countries in the Global South compete when their data is being used to train Western AI systems?

PA: We need to focus less on building AI and more on applying it effectively. The true value of AI lies in how it's used, not just who owns it. In the U.S. and parts of Europe, there's a lot of fear and scepticism around AI, which slows adoption. In contrast, young people in the UAE, India, and Indonesia are experimenting, learning, and innovating through play and adaptation. They're going to come up with practical uses that benefit their sectors in ways the West hasn't imagined.

There's also an analogy I like from the fashion industry: it operates without copyright.

Designs are copied and remixed constantly, yet this openness drives creativity and innovation. The music industry, by contrast, tried to enforce copyright with endless lawsuits. Film piracy remains widespread in the Global South because, in many countries, a movie ticket costs half your salary; of course there will be piracy when the business model is unworkable.

But the industry shifted its mentality.

They realized these people weren't going to be their customers anyway; it's a long tail of people who couldn't afford the movie theatre. But what piracy did was provide market research. If you see 42 million downloads for a certain show, you can see audience patterns and what's popular before you even invest. It saves enormously on market research. So they realized: let's open it up.

Similarly, open-source AI allows shared growth. We've seen this evolution in Big Tech itself: from closed systems to more collaborative models. Even Microsoft, which used to be very risk-averse and litigious, realized it makes sense to do open code because you can work together at the frontier.

In some regions, Big Tech even functions as an enabler where governments fail to invest in infrastructure. For instance, when companies lay undersea cables across Africa, they're doing what many governments have neglected to do; it's extraordinarily expensive infrastructure. When you have governments committed to corruption, to non-change, to a backward mentality, then by comparison Big Tech becomes more of an enabler than your own government.

So we have to recognize that people don't see it as colonial. They often see their own governments as holding them back more than Big Tech. Africa has so much potential, but connectivity is so expensive because governments aren't doing enough. If you ask the average African, they're happy Big Tech is there. Are they victims and ignorant? No, they just realize governments aren't going to do anything.

And here's another dimension: less than 14% of countries are liberal democracies. The only powerful contender that can hold state governments to account is Big Tech. If they say, "No, we're not going to remove this content just because you said so," they can do it because they have insane amounts of money. Just the five big tech companies are as rich as some of the richest countries in the world.

This creates tensions, whether in Russia or different parts of the world, it creates a counterforce.

So when young people and citizens of many countries are optimistic about Big Tech, it's not necessarily colonial, it's in contrast to the context within which they operate, which may be very restrictive governments, too much corruption, or very old-school, 70-plus geriatric leadership.

AO: That raises another question. As digital access expands, poverty today seems defined less by lack of access and more by vulnerability—to data exploitation, algorithmic exclusion, or exploitative digital labour. How do you view this shift?

PA: That's a very good observation, actually. The old idea of the "digital divide" is still a sticky concept—and of course, we still have ways to go because there are still people without access, even in developing countries. The digital divide hasn't disappeared completely, but things have changed a lot and will change faster than in the past.

But you're absolutely right in your framing: we should stop seeing this as the "haves" and the "have-nots," with those without forever catching up to those with.

Actually, it's more about who has a healthier internet, a safer internet, an internet that takes care of their mental and physical well-being. It's about the vulnerable cases: does this work for me? I don't have to be online all the time. Someone who is online constantly could be an addict, while someone who uses the internet in a limited but enabling way could be far more fulfilled than someone who is saturated.

By seeing this just on the access level, we're limiting our expectations of what technology should be. It's no longer about who has access, it's about what kind of access. Who has a safe, healthy, and empowering online experience? Someone constantly online might be more vulnerable than someone who engages meaningfully but selectively. It's not just a question of connection, but of well-being.

Technology should enable rather than exhaust us.

AO: You talk about the "paternalism" in how the West approaches technology in the Global South. Could you elaborate?

PA: Yes, there's still a deep-seated assumption that the Global South is a blank slate, that its people lack knowledge, literacy, or capacity. This leads to top-down "charitable" projects that ignore local creativity.

I see this in aid agencies, in big tech, even in well-meaning government programs. But people in the Global South are not passive. They're reimagining technologies to fit their own realities. The problem is not a lack of innovation; it's that the global imagination hasn't caught up.

AO: Western governments are implementing strong AI regulations. Do you think these frameworks genuinely protect the Global South, or do they risk imposing Western ethics while ignoring local realities?

PA: Regulations are often shaped by the contexts they emerge from, and Western frameworks don't always translate. The problem is that many governments in the Global South invest heavily in technology development but not enough in understanding how people—especially women and marginalized groups—actually use it.

It's something as boring as money, actually. Governments don't invest enough in what's happening on the user end. We're not doing large-scale surveys of how young people are using these tools or how we can improve them. They're putting everything into building the technology itself, but without considering why women may hesitate to use certain things.

For example, the rise of deepfake pornography and online misogyny means that many women choose not to be online at all. Their reluctance isn't irrational; it's a rational response to unsafe digital environments.

We don't pay attention to how to secure different populations, particularly the most vulnerable. Women make up half the population, and yet they face extraordinary and disproportionate harms. If you intersect gender with caste, religion, race, or whatever it is in different contexts, the harms amplify multiple times.

Unless we build systems that work for the most vulnerable—women, minorities, lower-income groups—we will continue to reproduce exclusion. If you can build technologies that work for them, you're building technology that works for everybody, because it has to be versatile and sensitive to a variety of harms, and that doesn't come from lazy thinking.

AO: How do you see AI transforming society, and what responsibilities do individuals and governments have to ensure fairness?

PA: First, we must understand that data is currency. The Global South holds vast data wealth, it should protect it like gold, not give it away cheaply under the illusion of "philanthropy."

Second, we need accountability. Much of the harm online—especially misogyny and hate speech—persists because platforms don't enforce consequences. Just as drivers and sellers have ratings, users should too. A healthier digital ecosystem requires fairness on all sides.

Finally, trust is everything. Lose public trust, and digital transformation collapses. It's that simple.

AO: You have noted leisure in this context. Has leisure itself become another site of extraction, where entertainment, microtransactions, and performative consumption deepen inequality?

PA: Leisure has always been a paradox. If you look at the history of leisure, the word itself basically means unstructured time and space. When we think about what that means, most of our lives are very structured. This is your family, this is school, this is where you go to work, where you sleep, where you eat.

When you have a small space, whether it's a park, an activity, or somewhere you can just do something naturally where there are not too many rules, it allows you to imagine what can be. Leisure could be very important because it could be very mundane, and we underestimate its power.

Through leisure comes play. They're very closely related. Play happens when you have leisure time, that means you've had time and energy at your disposal to think about and flirt with the rules of the game, which may reinvent the rules of the game.

Yet, in today's digital economy, leisure is also monetized. Every click, every moment of distraction is commodified. Still, we shouldn't underestimate leisure's potential.

Many transformative ideas—especially in digital art and culture—emerge from people using their free time to experiment. So while the system may exploit our leisure, it also provides the conditions for new cultural forms to arise.

AO: What kinds of AI-related anxieties or aspirations have you observed among young people in the Global South?

PA: One of the most visible trends is AI companionship—and this exists on a full spectrum, from simply feeling less alone, like having a friend, to perhaps someone who loves you.

Think about this: a lot of the Middle East has a disproportionate number of men compared to women. The UAE is a classic case. These migrants are lonely. They share a room or apartment with a couple of other men, then they go to work and come home.

When they go online, they can chat with someone. Sometimes it feels like you're being listened to because you don't have someone to download your day to.

If you're in a relationship, you get to download your day, tell somebody something to make it feel like today mattered to somebody. The act of expressing yourself can feel relieving. So it's therapeutic on multiple levels. It's not surprising that the number one use for DeepSeek very quickly became therapy.

Especially in conservative societies, the more conservative the culture, the more restrictive the norms around expression. It's not that people are doing radical things, they want to express themselves.

How can I become who I am if I'm confined to this? They want to discover who they are, and if they choose to come back to traditional norms, that's their choice because they've become the person who made that choice.

So this whole notion of intimacy, the notion of companionship, that is one of the strongest trends.

It's not necessarily about utility. In conservative societies, these tools offer spaces for self-expression that might otherwise be forbidden. They allow people to explore identity and emotion safely. So while the West often sees AI as a threat to human connection, in these contexts, it can actually restore it.

AO: Some critics argue that digital culture encourages individualism over community. Are we losing social or cultural capital by embracing this kind of "digital individualism"?

PA: Actually, we shouldn't assume it's digital individualism, because it's not. Even seemingly solitary technologies are profoundly social. Gaming communities, for example, are very social in character. When you engage with ChatGPT, oftentimes it's embedded in social situations.

This has happened with mobile phones and television, too. You and I watch Netflix, that might seem like an individual act. But what's social is you and I discussing it afterwards. Tools like ChatGPT or Gemini are also used socially, people use them to mediate family conflicts, to seek advice about relationships, even to negotiate with partners or parents. It has to come back to us investing in our relationships.

In the end, we know that when we are socially connected, we feel better. Technologies can facilitate that. What's interesting in my research is that I found ChatGPT and tools like Gemini were often used to help resolve marital conflicts: you don't have couples counsellors, and you can't get your husband to attend them anyway.

So you ask the AI: "How do I handle this conflict with my parents?" and then it impacts your actual relationship.

So individual use often leads back to social connection. The intent matters, is my intent to go on ChatGPT social or individualistic? And individualistic is not a dirty word, especially in very socially oriented cultures, because that means you need breathing space.

In collectivist cultures, a bit of individual space is not selfish, it's breathing space. It allows reflection, which ultimately strengthens relationships.

AO: Given the growing rivalry between the U.S. and China, are we entering a new era of "digital non-alignment"?

PA: Yes, we are witnessing new geopolitical alignments around data and AI.

Take BRICS, for instance. Ten years ago, it was merely an investment acronym: Wall Street didn't want to use "emerging markets," which sounded old school, so they called it BRICS so people could invest in this cluster.

Today, BRICS means something else entirely. They're doing data sharing, open source development, and technological collaboration. They've actually created real geopolitical alliances.

Ironically, global tensions—especially during the Trump era—pushed many countries in the Global South to unite and cooperate more closely. Trump is actually the best thing that's happened to the Global South in some ways.

We've temporarily put aside our differences to come together because we're all facing similar challenges. What began as a reaction to Western dominance has become an assertion of digital sovereignty. This geopolitical shift has resulted in real data sharing and connection.

AO: What kind of future do you envision, what would a truly balanced digital society look like?

PA: A society that values human relationships as much as technological progress. We pay teachers, caregivers, and nurses the least, even though they do the most irreplaceable work. AI should help us revalue these roles, not undermine them.

The ultimate measure of technological success is not efficiency, it's empathy. If technology can enhance our capacity to connect, care, and imagine together, then we're moving in the right direction.

AO: Finally, where does digital hope arise today?

PA: Digital hope arises with every young person who decides to use it because they are determined that they are not going to be punished for being born in conditions and contexts that they did not choose.

Think of a student in rural Kenya whose teacher doesn't show up, whose parents can't read and write, but who still logs onto ChatGPT to learn. They are bursting with curiosity. These are the prototypes, people who refuse to be limited by circumstance. They're not waiting for permission or infrastructure. They are using what's available to imagine and build new possibilities.

These young people aren't going to be held back because they were born into unfortunate circumstances. These are the people we need to look at to understand why technology is meaningful for our future. They remind us that optimism is not naïve, it's a form of resistance, a choice to participate in the future despite the odds. That's where digital hope begins.

AO: If there's one takeaway you'd like readers to remember from your books, what would it be?

PA: That optimism is not naïve, it's strategic. The majority of the world doesn't have the luxury to be pessimistic. They are too busy building the future.

We must learn from that spirit. Rational optimism, rooted in reality, but open to possibility, is what will sustain us in the digital age.
