Social media age bans will fail because they are not radical enough
Social media age bans are a half-measure. Only by banning toxic business models that seek to exploit and steer social attention can we create a healthier digital space.
Two decades ago, societies made a huge mistake. They delegated to businesses the design of the main spaces where people are social: we call these “social media”. Only now are the full costs of that historic error being grasped.
It took a decade for the problems with social media to emerge, as the business models of commercial platforms matured and converged. It took a further decade to uncover the full scale of the problem, through multiple scandals and alarming research, particularly on the psychic costs from social media to young people. Important regulatory action was taken in some jurisdictions, but the problems didn’t stop.
Social media age bans
By 2024, two decades after Facebook was founded, calls for drastic action, in particular age bans on social media for children and younger teenagers, began to surface in multiple countries. Early in 2026, no European government can afford to ignore the issue.
But age-related bans on social media are unlikely to work. The reason is not that radical action is unneeded against the risks to social life posed by the media we call “social”. The reason is that age-related bans don’t address the core problem head on. It is as if societies were looking at themselves in the mirror, realising their sickness, but being distracted by one important symptom, and ignoring the wider disease.
What then is the root problem with today’s social media? It is platform business models that seek to exploit and steer social attention: models that leverage data about platform users as the means to generate advertising income, either directly or by more complex means.
To point to those business models is of course not new. What’s needed, and so far missing, is to admit a difficult truth: that only by banning those business models, as an exploitative cancer in everyone’s social life, will we begin to address the problems with social media that now seem unavoidable.
There are no circumstances in which we can allow those business models to continue and expect a healthy social domain for children or adults. The question is what we do about this, once we confront the ugly truth in the mirror.
Tackling the root cause
Step one is to realise that all regulatory measures proposed so far, including age-related bans, are half-measures. Let’s start with age-related bans, because they are highly topical right now. They will fail for at least three reasons.
First, while they seem radical, they address only half the problem: toxic social media business models drive polarisation among adults too, poisoning their politics and undermining their sources of trusted facts.
Second, age-related bans work by punishing young people in a discriminatory way, risking the alienation of many young people who absolutely want the problem of toxic platforms to be confronted.
Third, such bans target specific platforms only, leaving space for obvious work-arounds, even if a genuinely secure age verification process could be achieved. For all those reasons, age-related bans are a costly distraction from the core issues.
Meanwhile earlier regulatory moves – for example, the increasingly tough enforcement of Europe’s GDPR to restrict trading in users’ data without consent, the Digital Services Act and its regulation of very large platforms, and the UK’s Online Safety Act’s rules banning some pornographic and self-harm-promoting content – are throwing sand in the engine of commercial social media. But they have not resolved, and never will resolve, the wider problems caused by commercial social media.
Why not? Because they chase specific symptoms rather than tackling the root cause: the continued operation of business models whose basic drive is to exploit attention by shaping which content flows fastest across platforms. Unless we ban those business models, new platforms will keep emerging, and regulators will always be playing catch-up.
Reluctance to change
Step two is to acknowledge directly the reasons why, as societies, we find it so hard to confront head on the problem of toxic social media business models. One reason is governments’ fear of reprisals from businesses that have more users than they have citizens.
Another is the fact that almost all of us are users of these platforms, and get some benefits from them, for the simple reason that social connection is a good thing in principle, and still can be good at an individual level, in spite of increasingly clear societal risks. So the attempted shift to a healthier social media system is going to be difficult, even if it can no longer be postponed.
Rebuilding social media
Step three is to realise that, because all of us have a stake in social media, solutions cannot be just about regulatory bans, essential though they are. They must also be about social reconstruction: reconstructing and rebuilding the space of social media for the better. Which means not just regulatory restrictions but governments generating funds that, at arm’s length, can support those not-for-profit social media that already exist, or at least those not based on the standard toxic business models.
At the moment those alternative social media – the platforms like Mastodon on the so-called Fediverse – have no chance of thriving. They lack funds (crowdsourcing will never be enough), and they lack user numbers, because we are not free to transfer our contacts and data across to those platforms from the commercial platforms that have captured us.
As Cory Doctorow argued a few years back, regulators could change that overnight by forcing all social media platforms to make users’ contacts, conversations and uploaded media transferable.
A better space of the world
Step four is to acknowledge openly that if societies do decide to tackle the problem of social media head on, there will be challenges. Yes, there will be casualties amongst those who have worked hard to live within the toxic social media economy: radical reform won’t be good, at least initially, for influencers.
Yes, this transformation will require some idealism in politics at a time when this appears to be in short supply. But young people are crying out for idealism here. Consider the Ctrl+Alt+Reclaim movement across Europe and recall that two-thirds of UK 16-24 year-olds said in a survey last year that social media did more harm than good.
And no: a serious attempt to redress humanity’s catastrophic error in how it built social media cannot be the work of one country alone – in part, because it’s a global problem, so demands multinational solutions, but above all because the source of the problem lies in two very powerful countries, the United States and China.
Since China, historically, has shown more willingness to regulate its digital platforms seriously than the United States, and its Chinese-language platforms have limited global coverage, the main difficulty lies in confronting US Big Tech. Could social media be the wicked problem on which what Canadian PM Mark Carney recently called the “middle powers” finally coalesce into action? Quite possibly.
The scale of the challenge and the potential opportunity could not be clearer. Healthier spaces of social connection – or, as I recently put it, a better “space of the world” – will benefit our societies, our politics and our young people long after today’s politicians are forgotten. But if they fail to take the chance to build those healthier spaces, today’s politicians may never be forgiven.
When the hypernudge becomes the rule in platform advertising
LSE’s Professor Nick Couldry uses the framework of data colonialism to reflect on Meta CEO Mark Zuckerberg’s claim that his company can replace advertising agencies using AI-driven processes.
Very big changes are underway in the world of advertising. Although I am not an advertising specialist, this blog will reflect a little, from a broader civic perspective, on Mark Zuckerberg’s recent claim that Meta will in future replace the ‘creative’ function of advertising agencies, and even their targeting and measurement functions, with an AI-driven automated process of designing, testing and measuring the effectiveness of ads delivered directly by Meta to companies that simply want to promote their products. In the interview published by Stratechery, Zuckerberg billed Meta as ‘the ultimate business agent’; his interviewer Ben Thompson quipped that it would be ‘The Best Black Box of all time’, seemingly without irony.
Zuckerberg’s claim is highly contested, particularly in the advertising industry, and we’ll see over time how it works out in practice. Meanwhile, it’s useful to examine that claim in the context of much longer social transformations. I will offer a provocative way of reading what’s been going on, but I hope also a useful one: the data colonialism framework that I’ve developed with Mexican-US writer Ulises Mejias.
The hypernudge and data extraction
We’re certainly not the first theorists to sense that something really big has been going on in market societies over the past two decades, a matter not just of the technical details of advertising markets, but a change in the very space where advertising and many other things are possible. Legal scholar Karen Yeung caused a stir a decade ago with an article on what she called the ‘hypernudge’. The concept of a nudge needs no explanation, but Yeung asks: what if an individual’s behaviour is nudged at every point in their digital journey, based not on static data, but dynamically evolving data, informed by that individual’s actions and interactions? Then a real question arises about how we view the choices of that individual: are they still in any sense authentic, free choices, or are they more like the motion of a ball in a pinball machine? Or, as Yeung put it, could we ‘be slowly but surely eroding our capacity for authentic processes of self-creation and development’, that is, our capacity to be the free, choice-making citizens that liberalism has always assumed we are?
Another legal theorist, Julie Cohen, went further in her 2019 book Between Truth and Power, writing about how continuous data extraction by advertisers, platforms and countless others is changing the scope of where profit is generated in society: it’s changing the gearing of the whole capitalist machine. Cohen suggests that ‘the overriding goal of data refineries and data markets is not understanding [of consumer behaviour] but rather predictability in pursuit of profits’ (p 71, added emphasis). These data-driven operations, so standard in today’s markets, according to Cohen ‘work to maintain and stabilize the overall pool of consumer surplus so that it may be more reliably identified and easily extracted’. In other words, the very idea of communicating with people as a tool for making profits is put into question, as if, from the point of view of business function, it had become beside the point. The implications, Cohen suggests, are deep: ‘for individuals and communities, the change in status from users and consumers to resources is foundational’. Meanwhile the whole public realm comes to ‘subordinate considerations of human well-being and human self-determination to the priorities and values of powerful economic actors’ (p. 73), that is, giant data harvesters such as Meta.
These quotations, though a little abstract, get the measure of the transformation that underlies Meta’s attempt to literally take over the space of the creative advertising industries. This is more than a technological adjustment. And, for all its immediate shock value, it is anything but surprising from a longer historical perspective. I know that some respected commentators, including John Gapper of the Financial Times, are sceptical about whether Meta’s moves really mean the death of advertising creatives, but others are less sanguine. And I think this second group is right to sense that something very big is under way. So let me stay with that concern and offer a way of making sense of what Meta has been doing as more than economic restructuring: in fact, as a historically significant shift in the basic building blocks of society. This is the transformation that Ulises Mejias and I call data colonialism.
Rather than seeking to convince you of the virtues of data colonialism as a theoretical framework, I want to see what we might gain by thinking about the dramatic potential changes in the advertising industry through a colonial lens, or rather a decolonial lens, because the point would be not to accept it, but to start seriously to resist it.
Making Sense of Data Colonialism
Ulises and I like to explain data colonialism through the four levels of a strategy video game. Colonization, launched in 1994 and relaunched in 2008 as part of Sid Meier’s Civilization series, lets you play as the Spanish, the Portuguese, the Dutch and of course the English. The goal is to win, which means moving through four levels: Explore, Expand, Exploit, and Exterminate. These four levels, melodramatic as they might first seem, are actually a pretty good picture of how practices of data extraction and data processing – and increasingly AI – have been unfolding across contemporary societies, and changing them. Taking each in turn, let’s consider how these four levels might help us make sense of how it came to seem natural to Mark Zuckerberg to say what he did about advertising creatives.
Explore
Let’s start with Explore. It’s really not a stretch to say that Facebook and Meta, and countless other platforms, have for two decades been exploring the expanse of social life, converting it into a domain which – in terms of the extraction of data – has very much come under their control.
Expand
Similarly with Expand: ever more of our time online, even when we aren’t on Meta platforms, is spent in spaces which automatically export data to Facebook, to build up its picture of us.
Exploit
What about Exploit? It is clearly not enough to have access to multiple domains from which you can extract data: the point for Meta, of course, is to profit from this. Basically, this involves two steps. First, to acquire our data at the lowest possible price, that is, for nothing. So today, instead of the cheap (i.e. virtually free) land of the historical colonies, Meta and countless other platforms benefit from cheap (i.e. virtually free) data about each of us. This doesn’t happen because we freely and consciously choose to give it up. It happens because digital platforms are designed so that it is impossible to act on them without data being extracted and stored about us. Every platform, in this sense, is what Ulises and I call a data territory. More than that, it is a territory where data can seemingly be taken pretty much for free – the only cost being the minimal one of complying with regulations for obtaining nominal consent to the data territory’s operations.
In historical colonialism, there was a legal fiction, developed to its most sophisticated form by the English, called terra nullius, no one’s land: Australia was declared terra nullius because no one (no one important) was assumed to be there, so, in the eyes of the law, it could be taken for nothing and without obstruction. Now the domains of our lives are treated as if they were data nullius – to all intents and purposes, business models operate as if that data can just be taken from us and used for the benefit of the business: the restrictions are minimal. Which means – and here’s the rub – that, once Meta had developed some of the most advanced AI tools to analyse its huge data harvest, it no longer made sense for Meta to do anything other than use that AI and that data to generate the product from which it has always wanted to profit, which was advertising.
Exterminate (or Eliminate)
And this is where the comparison gets most interesting. You’ll recall the final level of the Colonization game was Exterminate. Ulises and I don’t for one moment suggest that our digital lives, our relations with platforms, or indeed advertisers’ relations with Meta, involve the hideous violence that characterised historical colonialism. Of course not. But what if we replace the emotive term ‘exterminate’, with the less emotive description: ‘eliminate previously effective and well-functioning ways of life’? That too was the consequence of historical colonialism, alongside the physical violence, making unliveable what were once perfectly well-integrated and viable forms of life. The logic of colonialism required them to be overridden by different logics, different rationalities, different ways of doing things, so that profit and resources could be extracted from them without obstruction or interference. That has been the history of colonialism.
Zuckerberg’s new vision for the ad industry
But this puts in a very different light the arguments of legal theorists Karen Yeung and Julie Cohen that the space of market society is being changed fundamentally. Zuckerberg’s vision for Meta today is about exploiting its data territories (the platforms) as directly and seamlessly as possible to generate profit: the goal is not understanding consumers’ behaviour, but extracting profit from the predictability of consumer action. It is not about using AI power to present sovereign consumers with the chance to enjoy creative messages that express their culture and their society. In fact, Zuckerberg’s vision is not about communication at all. It is about extraction: profit extraction through a continuous machine-fitting of automated product to automatically tracked response. This is indeed, recalling the phrase from Zuckerberg’s May interview, ‘the best black box of all time’.
The historic idea that there is space in market societies for communications that creatively try to persuade consumers to make choices about products, brands, and the like belongs to a very different idea of markets and the public world from that of the data colonialists, if you allow the term. For data colonialists, data is the medium, AI the ultimate seamless tool, and creative thinking (by humans about their fellow humans) is beside the point. No wonder Zuckerberg felt he could say at a conference in May that ‘Meta’s tools can find interested users better than human marketers can’.
The end of human creativity in advertising?
For data colonialists, there is nothing at all strange about overwriting the creativity of advertising agencies or anyone else with ‘machinic processes’. The replacement of advertising creatives may not be total, but, I fear, it will be progressive and accelerating – unless its root causes are confronted and resisted. What is going on here in advertising is not fundamentally different from what is happening in music or education, where battles are being fought today to preserve the idea of music as more than just fuel for large AI models, or to preserve the space of the classroom as a place where dialogue between teacher and pupil can still happen (rather than just automated tracking and evaluation). Under way, through AI, across societies, is a redefinition of cognitive production – the spaces of ideas and creativity – with long-term consequences we can barely yet formulate. I’m not reassured when Stephan Pretorius, WPP’s AI lead, says that ‘creativity, in its purest form, remains a human skill’. Exactly, but that does not mean that this purest form is going to thrive in, or even survive, the AI transition.
For a long time, it seemed as if Big Tech platforms were just part of a wider ecology alongside traditional ad agencies and other specialist companies – a ‘frenemy’ perhaps, as WPP founder Martin Sorrell once famously put it, but still for some purposes recognisably a friend – but increasingly this will cease to be the case. Yes, the largest ad agencies, like WPP, will seek to tame this beast, to live and sleep with it, but that will not make the idea of an automated advertising delivery platform any more compatible with the historically important idea, which we risk losing, that markets are places where creative communications go on.
But, you might respond, AI’s celebrators in advertising say that their goal is to release to overworked executives the time to engage in advertising’s truly creative processes. That is, overall, to conserve existing ways of life while exploiting the resources of the earth more rationally. But that is what the colonizers have always said.
The battle to rebuild our social media has started
Social media platforms harm mental health and democracy. Nick Couldry argues it’s time to redesign our digital spaces beyond profit-driven models.
The harms of social media platforms – from the mental health of young people to their role in creating rabbit holes for isolated adults who went on to commit atrocities – are well known. But to move forward, writes Nick Couldry, we must address the root cause of these problems: the decision to delegate the design of our social spaces, at the most basic level, to profit-seeking businesses.
On 7 January 2025, Mark Zuckerberg caused widespread dismay when he announced that Meta would no longer fund fact-checking of the content that flows across its platforms, at least in the US. Content rules which had constrained offensive speech on Meta platforms were also summarily withdrawn: the Center for Countering Digital Hate estimates that almost all restraints on bad content on Meta sites will disappear.
The close alignment between Big Tech bosses and the new Trump administration has been widely noted. In a move whose politics are hard to miss, Mark Zuckerberg implied in a blog that he had in Trump a potential ally in the fight against overseas regulators that try to hold back US tech companies – an obvious reference to the EU regulatory authorities’ increasing activism against Meta (and X). Sam Altman of OpenAI called directly for Trump’s support to remove copyright restrictions in his speech to the Paris AI summit.
The battle to redefine Big Tech’s relations to regulation and the state is “on”, heralding the start of a new era of oligarchic power. Meanwhile, for those who don’t accept super-rich entrepreneurs’ power over social media platforms, the battle is also engaged. In early January, the federated comment platform Mastodon announced it was strengthening its organisational structure and raising funds to expand its operations, and a group of well-known tech figures and celebrities launched the “Free Our Feeds” campaign to help fund Bluesky, which, like Mastodon, is an open-source platform challenging X.
But what exactly is the battle about social media under way here? Why is it needed?
It is not about abandoning social media entirely. But the wider social problems that flow from the highly concentrated corporate infrastructure supporting social media have become increasingly obvious: the evidence that the mental health of young people, especially young girls, has been negatively affected by social media; the links between social media content and child suicides; the role that social media have played in creating rabbit holes for isolated adults who went on to commit atrocities.
In my recent book – The Space of the World: Can Human Solidarity Survive Social Media and What if it Can’t? – I ask what lies behind these seemingly disparate scandals and worries, and the corporate structure that underpins them.
The dangers of letting profit-seeking companies design our social spaces
The issue is not the smartphone as such. I don’t follow those like Jonathan Haidt who want to restrict use of the smartphone. Smartphones have innocent and important uses: finding our route from A to B, knowing when the next bus is coming, looking up something we can’t remember on the move.
At the root of our problems with social media is instead a deeper but less noticed mistake: that we delegated to profit-seeking businesses what I call “the space of the world”. That is, the spaces where, for much of our time, we carry on our social life: indeed the hyper-space of (almost) all possible spaces where we can be social.
The space of the world is about much more than technology or code: because we spend so much of our time on social media platforms, their space literally is the space in which we live. The Netflix drama Adolescence illustrates this with great vividness.
Until recent decades, the space of the world evolved slowly in response to new technologies, but no one ever designed or sought to manage it (kings and emperors would have loved to have that power, but it didn’t exist). But that has changed. The possibility of actually designing the space of the world emerged only with the opening of the internet and world wide web three decades ago, when every point in space became connected to every other point. This created the risk that bad content from somewhere – from anywhere – might spread more widely. So, what did we do about that risk? We designed platforms that, instead of mitigating the risks of online bad content that were already apparent, worked to amplify that risk – for profit.
Over two decades, we have allowed the wrong space of the world to be built: one driven by business models that rely on tracking everything we do and on pushing us content that generates “engagement”. What that really means is that we are served content which grabs our attention, and it is this – our attention in the social media space – which is sold, for profit, to advertisers and other actors willing to pay for it. In short, we have delegated to businesses the profitable exploitation of the social air we breathe.
The Space of the World tries to put into larger perspective the many things going wrong with social media, as well as the good things, and how we might address the problems in a far-reaching, not piecemeal, way.
Why building solidarity is so hard in the current system
One example concerns the evolution of commercial platforms into polarisation machines that, for profit, exploit human beings’ inherent tendencies to form in-groups and out-groups (the mindset of “us vs them”) – tendencies which social psychology from as far back as half a century ago suggested we should be wary of.
Yes, in the short-term, solidarity can be built online, because of social media’s ability to mobilise us quickly. But, for the longer-term, polarising social media platforms work to undermine all forms of authority, and on a global scale. No political theorist ever thought a global scale compatible with civil politics – most national scales have enough problems. The multiple problems with our politics today, while they have additional causes, therefore followed almost inevitably with the advent of the social media platforms that have been built over the past two decades.
The result is that a commercially shaped space of the world makes solidarity ever harder to build. Solidarity means finding a common stake in acting together with people who are not like us. Meanwhile, the problems humanity faces (solving which needs more, not less, solidarity) get ever more terrifying, above all, the climate emergency and rising global inequality. Yet we are condemned to spend most of our social time on platforms that exploit that time and steer it towards whatever noisy engagement generates profit. We have toxified our common life and our politics.
That is why today we must work to try and reverse these developments. In the book’s final chapter, I consider our options. The point is not to abolish social media – as if we could unlearn all the past two decades have taught us! – but to rebuild our platforms as if the “social” in social media mattered.
Practical models already exist. Federated social media, such as Mastodon and Pixelfed, don’t exist to extract data or profit and are built to a smaller scale that can be managed by communities. Bluesky is built on an open-source technical protocol that would allow future federation.
True, federated social media haven’t yet taken off on a scale to rival commercial social media. But there’s a very clear reason for that. Right now, we don’t have a market in social media, just monopolies: when you want to leave a platform, you can’t take your contacts or your conversations and images with you! Why not, as Cory Doctorow and others have suggested, force big platforms to allow the transferability of our contacts and our histories? It’s also down to us. Social media are part of our everyday habits, so it’s not easy or convenient to imagine changing them. But if we work together, we stand a much better chance, and the foundations have already been laid. Do nothing, and we risk staying within the trap that commercial social media has laid for us, condemned to relive the dangers to our collective social and political life that they pose. In The Space of the World, I look at ways to spring that trap and help us rebuild a better social and political world together.
Data colonialism comes home to the US: Resistance must too
LSE’s Professor Nick Couldry and SUNY Oswego’s Professor Ulises A. Mejias explain how developments in the US government can be seen through the lens of data colonialism, and what can be done to resist.
Elon Musk’s radical intervention in the US government through the Department of Government Efficiency (DOGE) has been called an “AI coup,” a “national heist,” and a “power grab.” Various experts are concerned that it is unconstitutional. But beyond its legal ramifications, the parts of it that involve getting access to government data fit well within the playbook of what we call data colonialism.
It is only through the lens of colonialism that we can understand what is happening – not just as the actions of a broligarch and his cadre of young DOGE hackers, but as a data grab – the largest appropriation of public data by a private individual in the history of a modern state. Elon Musk may have zero experience in government, but he has proven adept at weaponizing a data-extracting platform, and he seems to be applying the lessons he learned at X to seize sensitive federal data, assume control of government payment systems, and even gain access to classified intelligence.
This phenomenon can no longer be explained through the rubric of ‘surveillance capitalism’, since the point is not merely to make money by tracking what users do. The point of DOGE appears to be to put all the data that exists about US citizens in the hands of private corporations and government employees operating outside the law. In neoliberalism, citizens become consumers; in data colonialism, citizens become subjects. If the difference is not apparent, think of how government-held data, down to citizens’ DNA, is used to control the Uyghur population in China. In this version of colonialism, what’s being appropriated is not land but human life, through access to data.
Once we view recent events in the US through a colonial lens, the disregard for legality is also unsurprising. Historical colonialism’s doctrine of terra nullius was designed precisely to rewrite the law of new ‘colonies’ simply by the act of seizing the land, with the excuse that no one smart was using it. Strip aside the faux democratic narrative, and that’s Musk’s playbook, too. As Musk ally and Palantir cofounder Joe Lonsdale put it to the Washington Post:
“Everyone in the DC laptop class was extremely arrogant. These people don’t realize there are levels of competence and boldness that are far beyond anything in their sphere.”
In other words, only DOGE’s data manipulators are smart enough to recognize the potential of government data, and so only they deserve it.
The new alliance between Musk and President Donald Trump’s government might seem shocking, seen from the perspective of recent liberal capitalism. But it makes absolute sense within colonial history where lawless individuals and corporations (from the Spanish conquistador Hernán Cortés in Mexico to the British East India Company) worked in ever-closer alliances with states to produce a mature colonialism that combined corporate and sovereign power.
Until recently, there was a prospect of the US state supporting regulations to restrain Big Tech’s extractivism, in some form at least. Now, that’s a distant prospect. Yet even this shift has a colonial parallel. Initially, the Spanish crown was embarrassed by the exploits of the conquistadors and looked for legal ways to restrain them. But by the mid-16th century, those attempts at restraint were abandoned, and the path of no-holds-barred colonialism was set, only to be refined further by new colonial powers, including Holland and England. Perhaps that’s what the US government’s transformation signifies globally: a scene-setting for generalized data colonialism, with China as the second pole, just as historical colonialism supported multiple rival powers.
Unless, that is, resistance emerges. What might resistance look like if understood through the lens of colonial history?
We should not rule out regulatory interventions outside the US having some effect. However, to have any chance of success, national governments are going to need to form some large alliances. An alliance of Europe and Brazil, possibly with the UK, Australia, and others, would be formidable against US power, especially if implicated in a wider trade war from which the US can expect only a pyrrhic victory.
New regulatory proposals are needed to address global data extraction as it is—an unacceptable continuation of colonial power—and to forge alternatives beyond what the US and China currently offer.
But regulation won’t be enough on its own, so entrenched is the power of data colonialists. The prospects of legal challenges in the US itself are entirely uncertain, depending ultimately, in some cases, on which way the conservative-dominated Supreme Court will turn. For effective resistance, something more like a popular revolt will be needed across many countries.
What about US federal workers and, more broadly, users of US federal services? Can they kickstart wider resistance by protesting the new administration’s most egregious actions? Rutgers University labor studies professor Eric Blanc recently argued Musk would be vulnerable to the combined efforts of federal workers and their unions. The history of the indignados movement in Spain following the 2008 financial crisis may also offer pointers.
However, the longer-term success of worker and user resistance will likely depend on the global resonances that US activism generates.
Current wider geopolitics will inevitably constrain many governments from challenging the vision of largely unrestrained AI and tech platforms that the Trump administration and Big Tech want to force on the world. That’s why popular and worker resistance will be essential: issues such as sustainability, energy use, and the protection of workers are universal cross-border issues.
Ultimately, the businesses from which the broligarchs profit are global. The new US administration poses risks for countless nations in relation to data platforms, AI, and many other areas. That’s why a long-term global historical perspective is needed. For that perspective, we can turn to the five-centuries-long combination of capitalism and colonialism that has now entered a crucial new phase.
This post was originally published by Tech Policy Press and is reposted with thanks. It gives the views of the authors and not the position of the Media@LSE blog, nor of the London School of Economics and Political Science.
The elite contradictions of generative AI
LSE’s Asher Kessler and Professor Nick Couldry reflect here on a recent essay by Dario Amodei, CEO of Anthropic, in which he offers a vision of the future and of AI’s role in it. Amodei, who was interviewed this week by the Financial Times, predicts that AI will radically accelerate human progress and alter our world. Is this the future we want?
In October 2024, the CEO of one of the most important artificial intelligence (AI) companies in the world published a 40-page essay in which he imagined the future. Dario Amodei is the CEO of Anthropic, a company he co-founded in 2021 to research and deploy AI in an explicitly safe and steerable way. In Machines of Loving Grace, Amodei predicts that over the next decade, humans may be able to eliminate most forms of cancer, prevent all infectious disease, and double our lifespans. With the radical power of AI, we can accelerate, according to Amodei, our “biological freedom”; that is, our freedom to overcome the constraints of biology. It is clear that Amodei wants our attention.
The essay starts in a sober, scientific tone, with Amodei distancing himself from Silicon Valley hype about the ‘singularity’ and even the term ‘artificial general intelligence’ (AGI). But that does not stop him developing a very expansive view of how AI will change our lives: across biology and physical health, neuroscience and mental health, economic development, war and peace, and finally work and meaning. Even though he avoids the term AGI, he believes that extremely powerful forms of AI will be with us by 2026.
The result, according to Amodei, is that soon, “a larger fraction of people’s lives” will be spent experiencing “extraordinary moments of revelation, creative inspiration, compassion…”. Harnessing the immediate potential of AI will lead us to drugs that can make every “brain behave a bit better” and more consistently manifest feelings of “love and transcendence”. Alongside ‘biological freedom’ we will gain ‘neurological freedom’ – if, that is, we devolve much of the management of our bodies and minds to AI.
For Amodei, all this is possible, even probable, because AI will do more than add specific innovations: more fundamentally, AI will radically accelerate the rate of progress. In fact, Amodei predicts that over the next five to ten years, we may experience what ordinarily would be 50-100 years of transformation. And here comes his key image: we could be entering a “compressed 21st century” of progress.
Yet Amodei acknowledges some limitations. It is less likely, he argues, that global inequality will be reduced, or that economic growth will be shared. Nor, even with AI, is our future one in which democracy or peace is likely to be secured. On the contrary, “the triumph of liberal democracy and political stability is not guaranteed, perhaps not even likely” in Amodei’s AI future.
At this point let’s pause, and ask why in Amodei’s essay certain things are depicted as probable, whilst other phenomena drift out of the realm of possibility. Why, in spite of AI’s extraordinary powers, does it give us a future in which governing through democracy, or living with less inequality, seem less possible than us living until the age of 160? And what does this bifurcation reveal about the ideological assumptions that underlie how Amodei, and other Silicon Valley leaders, imagine the world and our future?
Let’s take the example of democracy and democratic values.
In Amodei’s essay, there is a peculiar relationship to democracy. Yes, some of democracy’s essential functions may be handled better: he even envisions ‘AI finance ministers’. In what seems to be a welcome realism, Amodei anticipates a future in which democracies are less likely to exist, something that – unlike some other Silicon Valley leaders (notably Peter Thiel) – he regrets. At the same time, he stresses how our inefficient democratic governments constrain and limit the true potential of AI. Yet throughout the essay, there is a complete silence on what democracy entails, and what it means for people’s lives. Democracy is, after all, the ability of people to come together and collectively decide what sort of future world they want to live in. In Amodei’s narrative, democracy and democratic values seem to be erased, or at least ignored, so it is perhaps unsurprising that he sees no reason to be optimistic about their survival. This erasure of democracy’s actual practice is hardly new.
Writing in the 1950s, against the backdrop of the space race, the philosopher Hannah Arendt warned that if we allow science and technology to capture our ability to imagine the future, we will abandon an older faith in collective agency. Whereas previously the future seemed open (in that it was imagined as at least partly the product of open-ended collective decision-making), today the colonization of the future by science and technology seems to have already captured and closed off the future, equating it to never-ending technological breakthroughs under corporate control, rather than what people come to decide in the future. As Satya Nadella, CEO of Microsoft (lead partner of OpenAI, which launched ChatGPT), put it chillingly in his 2017 autobiography: ‘we retrofit for the future’.
Put another way, if (as Silicon Valley seems to demand) we enable the arc of scientific and technological progress to colonize our future, this radically restricts humans from asking perhaps the most important political and social question: “what are we doing (and why)?” Arendt demands that we go on asking this question, which is fundamentally political, not technological:
“The question is only whether we wish to use our new scientific and technical knowledge in this direction, and this question cannot be decided by scientific means; it is a political question of the first order” (The Human Condition, 1958, p.3)
Do we want a future in which some people, almost certainly the richest, almost certainly concentrated in Western countries, double their life expectancy, while others’ life expectancy remains largely unchanged? Do we want a future without democracy? Do we indeed accept a world in which biologists like Amodei (biology is the expertise that he emphasises in his essay, although he also claims an earlier familiarity with neuroscience) have a privileged foresight of a future whose design tools and mechanisms they already control? Should the remarkable calculative feat of AlphaFold in predicting protein structure at inhuman speed (Amodei’s lead example) really dominate our debates about the social benefits and possible harms of AI?
These surely are the questions we can and must ask ourselves. To do so, we must rebuild faith in our agency to take back control of the future that the Big Tech visionaries and oligarchs of the past two decades have captured for themselves.
This post represents the views of the authors and not the position of the Media@LSE blog nor of the London School of Economics and Political Science.
Today’s colonial “data grab” is deepening global inequalities
What are the parallels between earlier stages of colonialism and today’s digital world? Nick Couldry and Ulises A. Mejias argue that instead of a land grab, we are today witnessing a data grab whereby our lives, in all their aspects, are being captured and converted into commercial profits. How does this new era of informational power deepen existing global inequalities?
The worker who knows that every movement he makes, every gesture and every delay, however slight, will be tracked and scored by his employer. The child whose every response, every experiment and every mistake is recorded by an “EdTech” platform that never forgets or forgives. The woman who discovers that all the information she records on a fitness app is being sold to third parties with unknown impacts on her health insurance premiums.
Each case captures a very modern form of vulnerability that depends on a huge inequality of informational power. The three cases might seem unconnected in their details, but they are all part of a single phenomenon: a data grab whereby our lives, in all their aspects, are being captured and converted into profits that benefit corporations more than they benefit us.
The individual cases may well sound familiar, but the scale of the wider pattern probably is not. We are used to doing deals with individual services (clicking Yes to their impenetrable terms and conditions statements), but the larger picture tends to elude us, because it is intentionally being hidden from view. Behind the curtain of concepts like “convenience” and “progress” lies the audacity of an industry that claims that our lives are “just there” as an input for them to process and exploit for value.
It is easy to forget that this data grab is only possible on the basis of a form of inequality that simply wasn’t practicable four decades ago. Not because there weren’t businesses willing to exploit us in every way they could, but because around thirty years ago a completely new form of computer-based infrastructure emerged, connecting billions of computers and recording all interactions we had with them in the form of data. In itself, this might not have been a problem. What was crucial was the handing over of control of this infrastructure to commercial corporations, who developed business models that ruthlessly exploited those data traces – those digital footprints – and the new forms of targeted marketing and behavioural prediction that analysing those traces made possible.
And so, in the era that we usually associate with the birth of a new type of freedom (the online world), a new type of inequality was born: the inequality that derives from governing data territories – spaces built so that everything we do there is automatically captured as data under the exclusive control of that territory’s owner.
The most familiar form of data territory is the digital platform. The most familiar form of platform is social media. Over the past decade numerous scandals have become associated with social media platforms, scandals that are still largely unresolved while the platforms continue to be only partly regulated. But those scandals are merely symptoms of a much wider inequality of power over how data is extracted, stored, processed, and applied. That inequality lies at the heart of what we call “data colonialism”.
The term might be unsettling, but we believe it is appropriate. Pick up any business textbook and you will never see the history of the past thirty years described this way. A title like Thomas Davenport’s Big Data at Work spends more than two hundred pages celebrating the continuous extraction of data from every aspect of the contemporary workplace, without once mentioning the implications for those workers. EdTech platforms and the tech giants like Microsoft that service them talk endlessly about the personalisation of the educational experience, without ever noting the huge informational power that accrues to them in the process. Health product providers of all sorts rarely mention in their product descriptions the benefits they receive from getting access to our data in the growing market for health-related data.
This is a pattern whose most obvious historical antecedent is the land grab that launched colonialism five centuries ago: a land grab that reimagined much of the world as newly dependent territories and resource stockpiles for the benefit of a few nations in Europe. Today, what’s being grabbed is not land, but data, and through data, access to human life as a new asset for direct exploitation.
Some think that colonialism as an economic force ended before capitalism properly got under way, and that colonialism was consigned to the past when the institutions of empire finally collapsed in the 1960s. But the neocolonial influences of historical colonialism live on in today’s unequal global economy and embedded racism, and those inequalities are perpetuated by data colonialism.
More than that, the ways of thinking about the world and its populations, about who has a prior claim on resources and the authority of science, live on in a process that Peruvian sociologist Anibal Quijano called “coloniality”. Coloniality – colonial thinking about how knowledge is produced and by whom – is the clearest explanation for the sheer audacity of today’s AI giants, who see fit to treat everything humanity has produced to date as fodder for their large language and other models.
In our recent book, Data Grab: The New Colonialism of Big Tech and How to Fight Back, we try to make sense of the parallels between the earlier stages of colonialism and today’s digital world. Doing so also helps us understand the ways in which the racial inequalities that are the legacy of earlier stages of colonialism go on being reproduced in the supposedly scientific guise of algorithmic data and AI processing today. Consider the forms of discrimination that black American sociologists Ruha Benjamin and Safiya Noble have outlined, or the hidden forms of work in the Global South that, as Ethiopian data scientist Timnit Gebru and others have shown, make a huge contribution to training the algorithms of so-called “artificial” intelligence in ways that are rarely recognised by the Big Tech industry.
The ongoing realities of five hundred years of colonialism live on, and are now converging with new inequalities associated with a data grab whose technical means only emerged three to four decades ago. Indeed, as earlier in history, the first step towards resisting this vast and all-encompassing social order is to name it for what it is: not just the latest improvement in capitalist techniques, but a new stage of colonialism’s ongoing appropriation of the world’s resources for the benefit of a few.
AI Companies Want to Colonize Our Data. Here’s How We Stop Them.
Artificial Intelligence companies are imposing a new “Doctrine of Discovery” on our digital commons, but we can resist.
In recent months, a number of novelists, artists and newspapers have sued generative artificial intelligence (AI) companies for taking a “free ride” on their content. These suits allege that the companies, which use that content to train their machine learning models, may be breaking copyright laws.
From the tech industry’s perspective, this content mining is necessary in order to build the AI tools that tech companies say will supposedly benefit all of us. In a recent statement to legislative bodies, OpenAI claimed that “it would be impossible to train today’s leading AI models without using copyrighted materials.” It remains to be seen if courts will agree, but it’s not looking good for content creators. In February, a California court dismissed large portions of a case brought by Sarah Silverman and other authors.
Some of these cases may reveal ongoing negotiations, as some companies figure out how to pressure others into sharing a piece of the AI pie. Publisher Axel Springer and the social media platform Reddit, for example, have recently made profitable deals to license their content to AI companies. Meanwhile, a legislative attempt in the United Kingdom that would have protected content generated by the creative industries has been abandoned.
But there is a larger social dilemma involved here that might not be as easy to detect: What about our content — content that we don’t usually associate with copyright laws, like emails, photos and videos uploaded to various platforms? There are no high-profile court cases around that. And yet, the appropriation of this content by generative AI reveals a monumental social and cultural transformation.
It’s easy to miss this transformation, because after all, this kind of content is considered a sort of commons that nobody owns. But the appropriation of this commons entails a kind of injustice and exploitation that we are still struggling to name, one not captured in the copyright cases. It’s a kind of injustice that we’ve seen before in history, whenever someone claims ownership of a resource because it was just there for the taking.
In the early phases of colonialism, colonizers such as the British claimed that Australia, the continent they had recently “discovered,” was in legal terms “terra nullius” — no one’s land — even though it had been inhabited for millennia. This was known as the Doctrine of Discovery, a colonial version of “finders, keepers.”
Such claims have been echoed more recently by corporations wanting to treat our digital content and even our biometric data as a mere exhaust that’s just there to be exploited. The Doctrine of Discovery survives today in a seamless move from cheap land to cheap labor to cheap data, a phenomenon we call “data colonialism.” The word “colonialism” is not being used metaphorically here, but to describe a very real emerging social order based not on the extraction of natural resources or labor, but on the continuous appropriation of human life through data. Data colonialism helps us understand today’s transformations of social life as extensions of a long historical arc of dispossession. All of human culture becomes the raw material that is fed to a commercial AI machine from which huge profits are expected. Earlier this year, OpenAI began a fundraising round for $7 trillion, “more than the combined gross domestic products of the UK and France,” as the Financial Times put it.
What really matters is not so much whether generative AI’s outputs plagiarize the content of famous authors owned by powerful media groups. The real issue is a whole new model of profit-making that treats our lives in data form as its free input. This profitable data grab, of which generative AI is just an egregious example, is really part of a larger power struggle with an extensive history.
To challenge this, we need to go beyond the narrow lens of copyright law and recover a broader view of why extractivism, under the guise of discovery, is wrong. Today’s new — and so far largely uncontested — conversion of our lives and cultures into colonized data territories will define the relations between Big Tech and the rest of us for decades, if not centuries. Once a resource has been appropriated, it is almost impossible to claim it back, as evidenced by the fact that the Doctrine of Discovery is still cited in contemporary government decisions to deny Indigenous people rights over their lands.
As with land, so too with data. Do nothing, and we will count the costs of Big Tech’s Doctrine of Discovery for a long time to come.
Applying Historical Lessons in the Age of AI
Unfortunately, one-track approaches to confronting these problems, like quitting a particular social media platform, will not be enough. Since colonialism is a multifaceted problem with centuries of history, fighting back against its new manifestations will also require multifaceted solutions that borrow from a rich anti-colonial tradition.
The most important tool in this struggle is our imagination. Decolonizing data needs to become a creative and cultural movement. It is true that no colonized society has managed to decisively and permanently undo colonialism. But even when colonial power could not be resisted with the body, it could be resisted with the mind. Collective ingenuity will be our most valuable asset.
In our recent book Data Grab: The New Colonialism of Big Tech and How to Fight Back, we outline a number of practical ways in which we can begin to apply this kind of creative energy. We borrow a model from Latin American and Latine activists, who encourage us to act simultaneously across three different levels: within the system, against the system and beyond the system. Limiting ourselves to only one of these levels will not be enough.
What might this look like in practice? Working within the system might mean continuing to push our governments to do what they have so far largely failed to do: Regulate Big Tech by passing anti-trust laws, consumer protection laws and laws that protect our cultural work and heritage. It might seem tempting to want to abandon mainstream politics, but doing so would be counterproductive in the long term.
But we cannot wait for the system to fix itself. This means we need to work against the system, embracing the politics and aesthetics of resistance as decolonial movements have done for centuries. There are plenty of inspiring examples, including those involving unionization, workers’ rights, Indigenous data sovereignty, environmental organizing, and movements against the use of data technologies to carry out wars, surveillance, apartheid and the persecution of migrants.
Finally, we need to think beyond the system, building ways of limiting data exploitation and redirecting the use of data toward more social, democratic goals. This is perhaps the most difficult but most important task. It will require new technologies as well as new ways of rejecting technology. A large collective and imaginative effort is needed to resist data colonialism’s new injustices. This effort is a crucial step on the longer journey to confronting and reversing colonialism itself.
Are we giving away too much online?
Do we really know how much data we’re giving away and how it’s being used? A new book by Nick Couldry and Ulises Mejias explores the murky world of big tech and how we can fight back.
Do you use social media? Shop online? Use a fitness tracker? Have a smart meter in your house? Chat with friends on messaging apps?
So many of our daily activities now take place online, it’s hard to imagine our lives without these services at our fingertips. But how often do you check the terms and conditions when downloading an app or signing up to an online account? How much do you know about the data that you’re giving away and how it’s being used?
In a new book, Data Grab, Professor Nick Couldry from the Department of Media and Communications at LSE and his co-author Professor Ulises A Mejias, a Mexican/US author from State University of New York Oswego, explore how big tech companies use our data and how it can be repackaged to manipulate our views, track our movements and discriminate against us.
They argue that through this “data grab”, colonialism – which was historically a land grab of natural resources, exploitative labour, and private property – has taken on a new form where big tech companies control and exploit our data for profit.
The new colonialism
When undertaking research for the book, Professors Couldry and Mejias found data was being extracted from every aspect of human life. “We realised the closest parallel was in the colonial land grab that happened around 1500 when Spain and Portugal suddenly realised there was a whole new world they could grab for themselves,” Professor Couldry says.
“It seemed to us this was a good analogy for the serious scale of what’s happening with data and that's when we started developing a framework for data colonialism. We weren't the first people to come up with this term, but we were the first people to see this as not just a metaphor but a new stage in the evolution of colonialism. What if colonialism could evolve? And that a new land grab could be happening right now, right in front of our eyes, through human life being captured in the form of data?”
A curated universe
Professor Couldry argues we’re at a moment where we are facing a radical change in social life, which will “become enforced until there is no way out of it” and we become ever more reliant on these services.
“We are increasingly going to be locked into a completely curated universe which is governed by corporations rather than ourselves,” he warns. We are already starting to see something like this in China, for example, where the platform WeChat – which started off as an app to chat with your friends – is now being used for all aspects of life.
You can buy goods on WeChat, get credit, submit your tax returns and deal with the government. “It has now become a complete platform for life and, as we know, Elon Musk has a similar vision for the platform X,” explains Professor Couldry.
“All these platforms work off the network effect,” he says. “The more people who are on there, the more convenient it is for you to be on there and the more inconvenient it is for you to step off.”
Professors Couldry and Mejias call this a “civilising narrative” – something which distracts us from the reality of what is going on and makes it seem more palatable, even appealing. With data extraction, we are told that it will make our lives more convenient, and we will be better connected to each other. With historical colonialism, the notions of progress or Christian salvation were often given as a justification.
The dark side of data
On a personal level, you might not be too worried about your data being collected; you might think you are resistant to its negative effects. At worst, you think it might lead to targeted adverts.
However, on a macro level, when our data is aggregated it can be used in ways we could never imagine. For example, it can be used to train algorithms to make decisions that affect large groups of people. Decisions such as whether you receive state support, are successful in a job application or have a visa approved. These algorithms can be opaque and discriminatory, leaving us with little knowledge about how a decision was made. And, like historical colonialism, the effects are usually felt most strongly by those who are already vulnerable.
And that is before we get on to the damage data collection can do to the environment. Data requires processing by huge banks of computers (known as data centres), which use a significant amount of electricity and deplete the power supply for other uses. In the book, the authors cite the example of west London where the building of much-needed new homes has been constrained until at least 2035 due to a lack of electricity supply caused by the expansion of data centres in the area.
It is estimated that data centres will use between 3 and 13 per cent of all electricity globally by 2030, up from the one per cent they used in 2010. The computers generate heat, which needs to be dissipated using vast amounts of fresh water. Thames Water has already expressed concern that its water supplies are getting dangerously low, and data centres are a key reason behind this.
How to fight back
This all paints a very bleak picture, but Professor Couldry doesn’t want us to despair. He argues this future can be averted by a large, collective effort to resist data colonialism’s injustices. “We can only change things together and we need to help each other make these changes. This is what we try and offer in the book: a new vision to help people understand that it doesn’t need to go this way.”
To offer inspiration, Data Grab provides examples of individuals and groups who are resisting. In the US, 17 communities have issued bans against the use of facial recognition software by police. Workers across the globe are taking a stand, and there is a growing number of unions at companies like Google, Apple and Amazon. Gig workers are taking matters into their own hands, exerting pressure on governments to guarantee their basic rights. Some are even undertaking “account therapy”, coaxing algorithms to behave in ways more favourable to workers and to counter their exploitative effects.
Whistleblowers such as Edward Snowden and Frances Haugen have helped expose the US surveillance apparatus and the willingness of big companies to put profit before the safety and mental wellbeing of their users. Some companies, such as Lush cosmetics, have closed down some of their social media accounts, and taken the financial hit for doing so, because of the harmful effects of these platforms.
Not all actions have to be on a large scale. As is noted in the book, “even putting your phone down for a couple of hours might be an act of defiance”. Likewise, refusing to accept cookies when visiting a website can be a form of resistance – something which, apparently, only 0.5 per cent of users currently do.
Professor Couldry also outlines several alternative platforms which are focused on community rather than profit and can be used instead of mainstream apps. These are known as “federated platforms”. The best-known is probably Mastodon, an alternative to X. Pixelfed can be used for sharing photographs, and PeerTube is a federated video-sharing platform.
With our lives increasingly taking place online, we are giving away more data than ever. Maybe, as Professors Couldry and Mejias state, “in the long run, a life full of smart devices is not really smart at all.” Maybe this is the time to take a stand.
Professor Nick Couldry was speaking to Charlotte Kelloway, Media Relations Manager at LSE.