When the hypernudge becomes the rule in platform advertising
LSE’s Professor Nick Couldry uses the framework of data colonialism to reflect on Meta CEO Mark Zuckerberg’s claim that his company can replace advertising agencies using AI-driven processes.
Very big changes are underway in the world of advertising. Although I am not an advertising specialist, this blog will reflect a little, from a broader civic perspective, on Mark Zuckerberg’s recent claim that Meta will in future replace the ‘creative’ function of advertising agencies, and even their targeting and measurement functions, with an AI-driven automated process of designing, testing, and measuring the effectiveness of ads delivered directly by Meta to companies that simply want to promote their products. In the interview published by Stratechery, Zuckerberg billed Meta as ‘the ultimate business agent’ – his interviewer Ben Thompson quipped, ‘The Best Black Box of all time’, seemingly without irony.
Zuckerberg’s claim is highly contested, particularly in the advertising industry, and we’ll see over time how it works out in practice. Meanwhile, it’s useful to examine that claim in the context of much longer social transformations. I will offer a provocative way of reading what’s been going on, but I hope also a useful one: the data colonialism framework that I’ve developed with Mexican-US writer Ulises Mejias.
The hypernudge and data extraction
We’re certainly not the first theorists to sense that something really big has been going on in market societies over the past two decades, a matter not just of the technical details of advertising markets, but of a change in the very space where advertising and many other things are possible. Legal scholar Karen Yeung caused a stir a decade ago with an article on what she called the ‘hypernudge’. The concept of a nudge needs no explanation, but Yeung asks: what if an individual’s behaviour is nudged at every point in their digital journey, based not on static data, but on dynamically evolving data, informed by that individual’s actions and interactions? Then a real question arises about how we view the choices of that individual: are they still in any sense authentic, free choices, or are they more like the motion of a ball in a pinball machine? Or, as Yeung put it, could we ‘be slowly but surely eroding our capacity for authentic processes of self-creation and development’, that is, our capacity to be the free, choice-making citizens that liberalism has always assumed we are?
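To make Yeung’s worry concrete, here is a minimal, purely illustrative sketch (all names and numbers are invented) of a hypernudge as a feedback loop: a user model updated after every single interaction, where each update changes which nudge is served next.

```python
# Illustrative sketch of a 'hypernudge' feedback loop; not any real system.
from collections import defaultdict
import random

class Hypernudge:
    def __init__(self, nudges):
        self.nudges = nudges              # candidate prompts, layouts, offers
        self.scores = defaultdict(float)  # dynamically evolving user model

    def next_nudge(self):
        # Mostly exploit what the evolving data says works; sometimes explore.
        if random.random() < 0.1:
            return random.choice(self.nudges)
        return max(self.nudges, key=lambda n: self.scores[n])

    def observe(self, nudge, engaged):
        # Every action and interaction feeds straight back into the model,
        # so the choice environment the user faces is never static.
        self.scores[nudge] += 1.0 if engaged else -0.5

loop = Hypernudge(["scarcity banner", "social proof", "autoplay next"])
for _ in range(1000):
    nudge = loop.next_nudge()
    loop.observe(nudge, engaged=random.random() < 0.5)  # stand-in behaviour
```

The pinball-machine question then becomes vivid: after a thousand iterations of such a loop, whose preferences is the ‘choice’ expressing?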
Another legal theorist, Julie Cohen, went further in her 2019 book Between Truth and Power, writing about how continuous data extraction by advertisers, platforms and countless others is changing the scope of where profit is generated in society: it’s changing the gearing of the whole capitalist machine. Cohen suggests that ‘the overriding goal of data refineries and data markets is not understanding [of consumer behaviour] but rather predictability in pursuit of profits’ (p. 71, emphasis added). These data-driven operations, so standard in today’s markets, according to Cohen ‘work to maintain and stabilize the overall pool of consumer surplus so that it may be more reliably identified and easily extracted’. In other words, the very idea of communicating with people as a tool for making profits is put into question, as if, from the point of view of business function, it had become beside the point. The implications, Cohen suggests, are deep: ‘for individuals and communities, the change in status from users and consumers to resources is foundational’. Meanwhile the whole public realm comes to ‘subordinate considerations of human well-being and human self-determination to the priorities and values of powerful economic actors’ (p. 73), that is, giant data harvesters such as Meta.
These quotations, though a little abstract, get the measure of the transformation that underlies Meta’s attempt to literally take over the space of the creative advertising industries. This is more than a technological adjustment. And, for all its immediate shock value, it is anything but surprising from a longer historical perspective. I know that some respected commentators, including John Gapper of the Financial Times, are sceptical about whether Meta’s moves really mean the death of advertising creatives, but others are less sanguine. And I think this second group is right to sense that something very big is under way. So let me stay with that concern and offer a way of making sense of what Meta has been doing as more than economic restructuring: in fact, as a historically significant shift in the basic building blocks of society. This is the transformation that Ulises Mejias and I call data colonialism.
Rather than seeking to convince you of the virtues of data colonialism as a theoretical framework, I want to see what we might gain by thinking about the dramatic potential changes in the advertising industry through a colonial lens, or rather a decolonial lens, because the point would be not to accept it, but to start seriously to resist it.
Making Sense of Data Colonialism
Ulises and I like to explain data colonialism through the four levels of a strategy video game. Colonization, the video game, was launched by Sid Meier in 1994 and relaunched in 2008 as part of his Civilization IV series. You can play as the Spanish, the Portuguese, the Dutch and, of course, the English. The goal is to win, which means moving through four levels: Explore, Expand, Exploit, and Exterminate. These four levels, melodramatic as they might first seem, are actually a pretty good picture of how practices of data extraction and data processing – and increasingly AI – have been unfolding across contemporary societies, and changing them. Taking each in turn, let’s consider how these four levels might help us make sense of how it came to seem natural to Mark Zuckerberg to say what he did about advertising creatives.
Explore
Let’s start with Explore. It’s really not a stretch to say that Facebook and Meta, and countless other platforms, have for two decades been exploring the expanse of social life, converting it into a domain which – in terms of the extraction of data – has very much come under their control.
Expand
Similarly with Expand: ever more of our time online, even when we aren’t on Meta platforms, is spent in spaces which automatically export data to Facebook, to build up its picture of us.
Exploit
What about Exploit? It is clearly not enough to have access to multiple domains from which you can extract data: the point for Meta, of course, is to profit from this. Basically, this involves two steps. First, to acquire our data at the lowest possible price, that is, for nothing. So today, instead of the cheap (i.e. virtually free) land of the historical colonies, Meta and countless other platforms benefit from cheap (i.e. virtually free) data about each of us. This doesn’t happen because we freely and consciously choose to give it up. It happens because digital platforms are designed so that it is impossible to act on them without data being extracted and stored about us. Every platform, in this sense, is what Ulises and I call a data territory. More than that, it is a territory where data can seemingly be taken pretty much for free – the only cost being the minimal one of complying with regulations for obtaining nominal consent to the data territory’s operations.
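In engineering terms, a data territory might be pictured with a small hypothetical sketch: a platform in which every user-facing action is wrapped so that it cannot run without a data record being emitted as a side effect. Everything below (names, store, actions) is invented for illustration.

```python
# Hypothetical sketch of a 'data territory': no action without extraction.
import functools
import time

DATA_STORE = []  # the territory owner's harvest, acquired at zero price

def extracting(action):
    """Wrap a platform action so it cannot run without emitting data."""
    @functools.wraps(action)
    def wrapper(user, *args, **kwargs):
        # Extraction is a built-in side effect, not an optional feature.
        DATA_STORE.append({"user": user,
                           "action": action.__name__,
                           "args": args,
                           "timestamp": time.time()})
        return action(user, *args, **kwargs)
    return wrapper

@extracting
def view_post(user, post_id):
    return f"{user} viewed post {post_id}"

@extracting
def like_post(user, post_id):
    return f"{user} liked post {post_id}"

view_post("alice", 42)
like_post("alice", 42)
print(len(DATA_STORE))  # 2 records: there was no data-free way to act
```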
In historical colonialism, there was a legal fiction, developed to its most sophisticated form by the English, called terra nullius, no one’s land: Australia was declared terra nullius because no one (no one important) was assumed to be there, so it could be taken for nothing and without obstruction, in the eyes of the law. Now the domains of our lives are treated as if they were data nullius – to all intents and purposes, business models operate as if that data can just be taken from us and used for the benefit of the business: the restrictions are minimal. Which means – and here’s the rub – that, once Meta had developed some of the most advanced AI tools to analyse its huge data harvest, it no longer made sense for Meta to do anything other than use that AI and that data to generate the product from which it has always wanted to profit, which was advertising.
Exterminate (or Eliminate)
And this is where the comparison gets most interesting. You’ll recall that the final level of the Colonization game was Exterminate. Ulises and I don’t for one moment suggest that our digital lives, our relations with platforms, or indeed advertisers’ relations with Meta, involve the hideous violence that characterised historical colonialism. Of course not. But what if we replace the emotive term ‘exterminate’ with the less emotive description: ‘eliminate previously effective and well-functioning ways of life’? That too was the consequence of historical colonialism, alongside the physical violence, making unliveable what were once perfectly well-integrated and viable forms of life. The logic of colonialism required them to be overridden by different logics, different rationalities, different ways of doing things, so that profit and resources could be extracted from them without obstruction or interference. That has been the history of colonialism.
Zuckerberg’s new vision for the ad industry
But this puts in a very different light the arguments of legal theorists Karen Yeung and Julie Cohen that the space of market society is being changed fundamentally. Zuckerberg’s vision for Meta today is about exploiting its data territories (the platforms) as directly and seamlessly as possible to generate profit: the goal is not understanding consumers’ behaviour, but extracting profit from the predictability of consumer action. It is not about using AI power to present sovereign consumers with the chance to enjoy creative messages that express their culture and their society. In fact, Zuckerberg’s vision is not about communication at all. It is about extraction: profit extraction through a continuous machine-fitting of automated product to automatically tracked response. This is indeed, recalling the phrase from Zuckerberg’s May interview, ‘the best black box of all time’.
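One hedged way to picture that ‘continuous machine-fitting’ is as a simple bandit loop over machine-generated ad variants: generate, serve, track the response, re-weight, with no human communication anywhere in the cycle. This is an illustration of the logic only, not a description of Meta’s actual system; every name and number below is invented.

```python
# Epsilon-greedy loop over auto-generated ad variants; purely illustrative.
import random

variants = {f"auto_ad_{i}": {"shows": 0, "clicks": 0} for i in range(8)}

def pick_variant(eps=0.1):
    if random.random() < eps:  # keep testing freshly generated variants
        return random.choice(list(variants))
    # Otherwise serve whatever the tracked responses say converts best.
    return max(variants,
               key=lambda v: variants[v]["clicks"] / max(variants[v]["shows"], 1))

def record_response(variant, clicked):
    variants[variant]["shows"] += 1
    variants[variant]["clicks"] += int(clicked)

for _ in range(10_000):  # design, test, measure, repeat
    v = pick_variant()
    record_response(v, clicked=random.random() < 0.03)
```

Nothing in such a loop needs to understand the consumer; it only needs their responses to be trackable and, in aggregate, predictable – exactly Cohen’s point.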
The historic idea that there is space in market societies for communications that creatively try to persuade consumers to make choices about products, brands, and the like belongs to a very different idea of markets and the public world from that of the data colonialists, if you allow the term. For data colonialists, data is the medium, AI the ultimate seamless tool, and creative thinking (by humans about their fellow humans) is beside the point. No wonder Zuckerberg felt he could say at a conference in May that ‘Meta’s tools can find interested users better than human marketers can’.
The end of human creativity in advertising?
For data colonialists, there is nothing at all strange about overwriting the creativity of advertising agencies or anyone else with ‘machinic processes’. The replacement of advertising creatives may not be total, but, I fear, it will be progressive and accelerating – unless its root causes are confronted and resisted. What is going on here in advertising is not fundamentally different from what is happening in music or education, where battles are under way today to preserve the idea of music as more than just fuel for large AI models, or to preserve the space of the classroom as a place where dialogue between teacher and pupil can still happen (rather than just automated tracking and evaluation). Under way, through AI, across societies, is a redefinition of cognitive production – the spaces of ideas and creativity – with long-term consequences we can barely yet formulate. I’m not reassured when Stephan Pretorius, WPP’s AI lead, says that ‘creativity, in its purest form, remains a human skill’. Exactly, but that does not mean that this purest form is going to thrive in, or even survive, the AI transition.
For a long time, it seemed as if Big Tech platforms were just part of a wider ecology alongside traditional ad agencies and other specialist companies – a ‘frenemy’ perhaps, as WPP founder Martin Sorrell once famously put it, but still for some purposes recognisably a friend – but increasingly this will cease to be the case. Yes, the largest ad agencies, like WPP, will seek to tame this beast, to live and sleep with it, but that will not make the idea of an automated advertising delivery platform any more compatible with the historically important idea, which we risk losing, that markets are places where creative communications go on.
But, you might respond, AI’s celebrators in advertising say that their goal is to release to overworked executives the time to engage in advertising’s truly creative processes. That is, overall, to conserve existing ways of life while exploiting the resources of the earth more rationally. But that is what the colonizers have always said.
The battle to rebuild our social media has started
Social media platforms harm mental health and democracy. Nick Couldry argues it’s time to redesign our digital spaces beyond profit-driven models.
The harms of social media platforms – from the mental health of young people to their role in creating rabbit holes for isolated adults who went on to commit atrocities – are well known. But to move forward, writes Nick Couldry, we must address the root cause of these problems: the decision to delegate the design of our social spaces, at the most basic level, to profit-seeking businesses.
On 7 January 2025, Mark Zuckerberg caused widespread dismay when he announced that Meta would no longer fund fact-checking of the content that flows across its platforms, at least in the US. Content rules which had constrained offensive speech on Meta platforms were also summarily withdrawn: the Center for Countering Digital Hate estimates that almost all restraints on bad content on Meta sites will disappear.
The close alignment between Big Tech bosses and the new Trump administration has been widely noted. In a move whose politics are hard to miss, Mark Zuckerberg implied in a blog that he had in Trump a potential ally in the fight against overseas regulators that try to hold back US tech companies – an obvious reference to the EU regulatory authorities’ increasing activism against Meta (and X). Sam Altman of OpenAI called directly for Trump’s support in removing copyright restrictions in his speech to the Paris AI summit.
The battle to redefine Big Tech’s relations to regulation and the state is “on”, heralding the start of a new era of oligarchic power. Meanwhile, for those who don’t accept super-rich entrepreneurs’ power over social media platforms, the battle is also engaged. In early January, the federated social media platform Mastodon announced that it was strengthening its organisational structure and raising funds to expand its operations, and a group of well-known tech figures and celebrities launched the “Free Our Feeds” campaign to help fund Bluesky, which, like Mastodon, is an open-source platform that is challenging X.
But what exactly is the battle over social media that is under way here? Why is it needed?
It is not about abandoning social media entirely. But the wider social problems that flow from the highly concentrated corporate infrastructure that supports social media have become increasingly obvious: the evidence that the mental health of young people, especially young girls, has been negatively affected by social media; the links between social media content and child suicides; the role that social media have played in creating rabbit holes for isolated adults who went on to commit atrocities.
In my recent book – The Space of the World: Can Human Solidarity Survive Social Media and What if it Can’t? – I ask what lies behind these seemingly disparate scandals and worries, and the corporate structure that underpins them.
The dangers of letting profit-seeking companies design our social spaces
The issue is not the smartphone as such. I don’t follow those like Jonathan Haidt who want to restrict use of the smartphone. Smartphones have innocent and important uses: finding our route from A to B, knowing when the next bus is coming, looking up something we can’t remember on the move.
At the root of our problems with social media is instead a deeper but less noticed mistake: that we delegated to profit-seeking businesses what I call “the space of the world”. That is, the spaces where, for much of our time, we carry on our social life: indeed the hyper-space of (almost) all possible spaces where we can be social.
The space of the world is about much more than technology or code: because we spend so much of our time on social media platforms, their space literally is the space in which we live. The Netflix drama Adolescence illustrates this with great vividness.
Until recent decades, the space of the world evolved slowly in response to new technologies, but no one ever designed or sought to manage it (kings and emperors would have loved to have that power, but it didn’t exist). But that has changed. The possibility of actually designing the space of the world only emerged from the opening of the internet and world wide web three decades ago, when every point in space became connected to every other point. This created the risk that bad content from somewhere – from anywhere – might spread more widely. So, what did we do about that risk? We designed platforms that, instead of mitigating the risks of online bad content that were already apparent, worked to amplify that risk – for profit.
Over two decades, we have allowed the wrong space of the world to be built: one driven by business models that rely on tracking everything we do and on pushing us content that generates “engagement”. What that really means is that we are served content which grabs our attention, and it is this – our attention in the social media space – which is sold, for profit, to advertisers and other actors willing to pay for it. In short, we have delegated to businesses the profitable exploitation of the social air we breathe.
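That engagement logic can be made concrete with a deliberately crude, hypothetical sketch of a feed ranker: content ordered purely by predicted attention-grabbing power, because attention is what is sold. The weights and fields are invented, not taken from any real platform.

```python
# Invented feed ranker: order purely by predicted engagement, i.e. saleable attention.
def engagement_score(item):
    # Outrage and us-vs-them conflict reliably grab attention, so a
    # profit-maximising ranker weights them heavily.
    return (2.0 * item["outrage"]
            + 1.5 * item["conflict"]
            + 0.3 * item["informativeness"])

def rank_feed(items):
    return sorted(items, key=engagement_score, reverse=True)

feed = rank_feed([
    {"id": 1, "outrage": 0.9, "conflict": 0.8, "informativeness": 0.1},
    {"id": 2, "outrage": 0.1, "conflict": 0.0, "informativeness": 0.9},
])
print([item["id"] for item in feed])  # the outrage post ranks first: [1, 2]
```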
The Space of the World tries to put into larger perspective the many things going wrong with social media, as well as the good things, and how we might address the problems in a far-reaching, not piecemeal, way.
Why building solidarity is so hard in the current system
One example concerns the evolution of commercial platforms into polarisation machines that, for profit, exploit human beings’ inherent tendencies to form in-groups and out-groups (the mindset of “us vs them”) – tendencies which social psychology from as far back as half a century ago suggested we should be wary of.
Yes, in the short term, solidarity can be built online, because of social media’s ability to mobilise us quickly. But, for the longer term, polarising social media platforms work to undermine all forms of authority, and on a global scale. No political theorist ever thought a global scale compatible with civil politics – most national scales have enough problems. The multiple problems with our politics today, while they have additional causes, therefore followed almost inevitably from the advent of the social media platforms that have been built over the past two decades.
The result is that a commercially shaped space of the world makes solidarity ever harder to build. Solidarity means finding a common stake in acting together with people who are not like us. Meanwhile, the problems humanity faces (solving which needs more, not less, solidarity) get ever more terrifying, above all, the climate emergency and rising global inequality. Yet we are condemned to spend most of our social time on platforms that exploit that time and steer it towards whatever noisy engagement generates profit. We have toxified our common life and our politics.
That is why today we must work to try and reverse these developments. In the book’s final chapter, I consider our options. The point is not to abolish social media – as if we could unlearn all the past two decades have taught us! – but to rebuild our platforms as if the “social” in social media mattered.
Practical models already exist. Federated social media, such as Mastodon and Pixelfed, don’t exist to extract data or profit and are built to a smaller scale that can be managed by communities. Bluesky is built on an open-source technical protocol that would allow future federation.
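The openness of these alternatives is not just rhetorical. As a small illustration, any Mastodon instance exposes a public, documented API, so a public timeline can be read without logging in or passing a proprietary gatekeeper. A sketch, assuming the requests package is installed and using mastodon.social only as an example instance:

```python
# Read a Mastodon instance's public timeline via its open REST API.
import requests

resp = requests.get("https://mastodon.social/api/v1/timelines/public",
                    params={"limit": 5}, timeout=10)
resp.raise_for_status()

for post in resp.json():  # each post is a plain, portable JSON object
    print(post["account"]["acct"], "-", post["url"])
```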
True, federated social media haven’t yet taken off on a scale to rival commercial social media. But there’s a very clear reason for that. Right now, we don’t have a market in social media, just monopolies: when you want to leave a platform, you can’t take your contacts or your conversations and images with you! Why not, as Cory Doctorow and others have suggested, force big platforms to allow the transferability of our contacts and our histories? It’s also down to us. Social media are part of our everyday habits, so it’s not easy or convenient to imagine changing them. But if we work together, we stand a much better chance, and the foundations have already been laid. Do nothing, and we risk staying within the trap that commercial social media has laid for us, condemned to relive the dangers to our collective social and political life that they pose. In The Space of the World, I look at ways to spring that trap and help us rebuild a better social and political world together.
Data colonialism comes home to the US: Resistance must too
LSE’s Professor Nick Couldry and SUNY Oswego’s Professor Ulises A. Mejias explain how developments in the US government can be seen through the lens of data colonialism, and what can be done to resist.
Elon Musk’s radical intervention in the US government through the Department of Government Efficiency (DOGE) has been called an “AI coup,” a “national heist,” and a “power grab.” Various experts are concerned that it is unconstitutional. But beyond its legal ramifications, the parts of it that involve getting access to government data fit well within the playbook of what we call data colonialism.
It is only through the lens of colonialism that we can understand what is happening—not just as the actions of a broligarch and his cadre of young DOGE hackers, but as a data grab—the largest appropriation of public data by a private individual in the history of a modern state. Elon Musk may have zero experience in government, but he has proven adept at weaponizing a data-extracting platform, and he seems to be applying the lessons he learned at X to seize sensitive federal data, assume control of government payment systems, and even gain access to classified intelligence.
This phenomenon can no longer be explained through the rubric of ‘surveillance capitalism’ since the point is not merely to make money by tracking what users do. The point of DOGE appears to be to put all the data that exists about US citizens in the hands of private corporations and government employees operating outside the law. In neoliberalism, citizens become consumers; in data colonialism, citizens become subjects. If the difference is not apparent, think of how government data, down to their DNA, is used to control the Uyghur population in China. In this version of colonialism, what’s being appropriated is not land but human life through access to data.
Once we view recent events in the US through a colonial lens, the disregard for legality is also unsurprising. Historical colonialism’s doctrine of terra nullius was designed precisely to rewrite the law of new ‘colonies’ simply by the act of seizing the land, with the excuse that no one smart was using it. Strip aside the faux democratic narrative, and that’s Musk’s playbook, too. As Musk ally and Palantir cofounder Joe Lonsdale put it to the Washington Post:
“Everyone in the DC laptop class was extremely arrogant. These people don’t realize there are levels of competence and boldness that are far beyond anything in their sphere”
In other words, only DOGE’s data manipulators are smart enough to recognize the potential of government data, and so deserve to control it.
The new alliance between Musk and President Donald Trump’s government might seem shocking, seen from the perspective of recent liberal capitalism. But it makes absolute sense within colonial history where lawless individuals and corporations (from the Spanish conquistador Hernán Cortés in Mexico to the British East India Company) worked in ever-closer alliances with states to produce a mature colonialism that combined corporate and sovereign power.
Until recently, there was a prospect of the US state supporting regulations to restrain Big Tech’s extractivism, in some form at least. Now, that’s a distant prospect. Yet even this shift has a colonial parallel. Initially, the Spanish crown was embarrassed by the exploits of the conquistadors and looked for legal ways to restrain them. But by the mid-16th century, those attempts at restraint were abandoned, and the path of no-holds-barred colonialism was set, only to be refined further by new colonial powers, including Holland and England. Perhaps that’s what the US government’s transformation signifies globally: a scene-setting for generalized data colonialism, with China as the second pole, just as historical colonialism supported multiple rival powers.
Unless, that is, resistance emerges. What might resistance look like if understood through the lens of colonial history?
We should not rule out regulatory interventions outside the US having some effect. However, to have any chance of success, national governments are going to need to form some large alliances. An alliance of Europe and Brazil, possibly with the UK, Australia, and others, would be formidable against US power, especially if implicated in a wider trade war from which the US can expect only a pyrrhic victory.
New regulatory proposals are needed to address global data extraction as it is—an unacceptable continuation of colonial power—and to forge alternatives beyond what the US and China currently offer.
But regulation won’t be enough on its own, so entrenched is the power of data colonialists. The prospects of legal challenges in the US itself are entirely uncertain, depending ultimately, in some cases, on which way the conservative-dominated Supreme Court will turn. For effective resistance, something more like a popular revolt will be needed across many countries.
What about US federal workers and, more broadly, users of US federal services? Can they kickstart wider resistance by protesting the new administration’s most egregious actions? Rutgers University labor studies professor Eric Blanc recently argued Musk would be vulnerable to the combined efforts of federal workers and their unions. The history of the indignados movement in Spain following the 2008 financial crisis may also offer pointers.
However, the longer-term success of worker and user resistance will likely depend on the global resonances that US activism generates.
Current wider geopolitics will inevitably constrain many governments from challenging the vision of largely unrestrained AI and tech platforms that the Trump administration and Big Tech want to force on the world. That’s why popular and worker resistance will be essential: issues such as sustainability, energy use, and the protection of workers are universal cross-border issues.
Ultimately, the businesses from which the broligarchs profit are global. The new US administration poses risks for countless nations in relation to data platforms, AI, and many other areas. That’s why a long-term global historical perspective is needed. For that perspective, we can turn to the five-centuries-long combination of capitalism and colonialism that has now entered a crucial new phase.
This post was originally published by Tech Policy Press and is reposted with thanks. It gives the views of the authors and not the position of the Media@LSE blog, nor of the London School of Economics and Political Science.
The elite contradictions of generative AI
LSE’s Asher Kessler and Professor Nick Couldry reflect here on a recent essay by Dario Amodei, CEO of Anthropic, in which he offers a vision of the future and of AI’s role in it. Amodei, who was interviewed this week by the Financial Times, predicts that AI will radically accelerate human progress and alter our world. Is this the future we want?
In October 2024, the CEO of one of the most important artificial intelligence (AI) companies in the world published a 40-page essay in which he imagined the future. Dario Amodei is the CEO of Anthropic, a company he co-founded in 2021 to research and deploy AI in an explicitly safe and steerable way. In Machines of Loving Grace, Amodei predicts that over the next decade, humans may be able to eliminate most forms of cancer, prevent all infectious disease, and double our lifespans. With the radical power of AI, we can accelerate, according to Amodei, our “biological freedom”; that is, our freedom to overcome the constraints of biology. It is clear that Amodei wants our attention.
The essay starts in a sober, scientific tone, with Amodei distancing himself from Silicon Valley hype about the ‘singularity’ and even the term ‘artificial general intelligence’ (AGI). But that does not stop him developing a very expansive view of how AI will change our lives: across biology and physical health, neuroscience and mental health, economic development, war and peace, and finally work and meaning. Even though he avoids the term AGI, he believes that extremely powerful forms of AI will be with us by 2026.
The result, according to Amodei, is that soon, “a larger fraction of people’s lives” will be spent experiencing “extraordinary moments of revelation, creative inspiration, compassion…”. Harnessing the immediate potential of AI will lead us to drugs that can make every “brain behave a bit better” and more consistently manifest feelings of “love and transcendence”. Alongside ‘biological freedom’ we will gain ‘neurological freedom’ – if, that is, we devolve much of the management of our bodies and minds to AI.
For Amodei, all this is possible, even probable, because AI will do more than add specific innovations: more fundamentally, AI will radically accelerate the rate of progress. In fact, Amodei predicts that over the next five to ten years, we may experience what would ordinarily be 50-100 years of transformation. And here comes his key image: we could be entering a “compressed 21st century” of progress.
Yet Amodei acknowledges some limitations. It is less likely, he argues, that global inequality will be reduced, or that economic growth will be shared. Nor, even with AI, is the future one in which democracy or peace is likely to be secured. On the contrary, “the triumph of liberal democracy and political stability is not guaranteed, perhaps not even likely” in Amodei’s AI future.
At this point let’s pause, and ask why in Amodei’s essay certain things are depicted as probable, whilst other phenomena drift out of the realm of possibility. Why, in spite of AI’s extraordinary powers, does it give us a future in which governing through democracy, or living with less inequality, seem less possible than us living until the age of 160? And what does this bifurcation reveal about the ideological assumptions that underlie how Amodei, and other Silicon Valley leaders, imagine the world and our future?
Let’s take the example of democracy and democratic values.
In Amodei’s essay, there is a peculiar relationship to democracy. Yes, some of democracy’s essential functions may be handled better: he even envisions ‘AI finance ministers’. In what seems to be a welcome realism, Amodei anticipates a future in which democracies are less likely to exist, something that – unlike some other Silicon Valley leaders (notably Peter Thiel) – he regrets. At the same time, Amodei stresses how our inefficient democratic governments constrain and limit the true potential of AI. But throughout the essay, there is a complete silence on what democracy entails, and what it means for people’s lives. Democracy is, after all, the ability of people to come together and collectively decide on what sort of future world they want to live in. In Amodei’s narrative, democracy and democratic values seem to be erased, or at least ignored, so it is perhaps unsurprising that he sees no reason to be optimistic about their survival. This erasure of democracy’s actual practice is hardly new.
Writing in the 1950s, against the backdrop of the space race, the philosopher Hannah Arendt warned that if we allow science and technology to capture our ability to imagine the future, we will abandon an older faith in collective agency. Whereas previously the future seemed open (in that it was imagined as at least partly the product of open-ended collective decision-making), today the colonization of the future by science and technology seems to have already captured and closed it off, equating it to never-ending technological breakthroughs under corporate control, rather than what people come to decide in the future. As Satya Nadella, CEO of Microsoft (lead partner of OpenAI, which launched ChatGPT), put it chillingly in his 2017 autobiography: ‘we retrofit for the future’.
Put another way, if (as Silicon Valley seems to demand) we enable the arc of scientific and technological progress to colonize our future, this radically restricts humans from asking perhaps the most important political and social question: “what are we doing (and why)?” Arendt demands that we go on asking this question, which is fundamentally political, not technological:
“The question is only whether we wish to use our new scientific and technical knowledge in this direction, and this question cannot be decided by scientific means; it is a political question of the first order” (The Human Condition, 1958, p.3)
Do we want a future in which some people, almost certainly the richest, almost certainly concentrated in Western countries, double their life expectancy, while others’ life expectancy remains largely unchanged? Do we want a future without democracy? Do we indeed accept a world in which biologists like Amodei (biology is the expertise that he emphasises in his essay, although he also claims an earlier familiarity with neuroscience) have a privileged foresight of a future whose design tools and mechanisms they already control? Should the remarkable calculative feat of AlphaFold in predicting protein structure at inhuman speed (Amodei’s lead example) really dominate our debates about the social benefits and possible harms of AI?
These surely are the questions we can and must ask ourselves. To do so, we must rebuild faith in our agency to take back control of the future that the Big Tech visionaries and oligarchs of the past two decades have captured for themselves.
This post represents the views of the authors and not the position of the Media@LSE blog nor of the London School of Economics and Political Science.
Today’s colonial “data grab” is deepening global inequalities
What are the parallels between earlier stages of colonialism and today’s digital world? Nick Couldry and Ulises A. Mejias argue that instead of a land grab, we are witnessing today a data grab whereby our lives, in all their aspects, are being captured and converted into commercial profits. How does this new era of informational power deepen existing global inequalities?
The worker who knows that every movement he makes, every gesture and every delay, however slight, will be tracked and scored by his employer. The child whose every response, every experiment and every mistake is recorded by an “EdTech” platform that never forgets or forgives. The woman who discovers that all the information she records on a fitness app is being sold to third parties with unknown impacts on her health insurance premiums.
Each case captures a very modern form of vulnerability that depends on a huge inequality of informational power. The three cases might seem unconnected in their details, but they are all part of a single phenomenon: a data grab whereby our lives, in all their aspects, are being captured and converted into profits that benefit corporations more than they benefit us.
The individual cases may well sound familiar, but the scale of the wider pattern probably is not. We are used to doing deals with individual services (clicking Yes to their impenetrable terms and conditions statements), but the larger picture tends to elude us, because it is intentionally being hidden from view. Behind the curtain of concepts like “convenience” and “progress” lies the audacity of an industry that claims that our lives are “just there” as an input for them to process and exploit for value.
It is easy to forget that this data grab is only possible on the basis of a form of inequality that simply wasn’t practicable four decades ago. Not because there weren’t businesses willing to exploit us in every way they could, but because around thirty years ago a completely new form of computer-based infrastructure emerged, connecting billions of computers and recording all interactions we had with them in the form of data. In itself, this might not have been a problem. What was crucial was the handing over of control of this infrastructure to commercial corporations, who developed business models that ruthlessly exploited those data traces – those digital footprints – and the new forms of targeted marketing and behavioural prediction that analysing those traces made possible.
And so, in the era that we usually associate with the birth of a new type of freedom (the online world), a new type of inequality was born: the inequality that derives from governing data territories – spaces built so that everything we do there is automatically captured as data under the exclusive control of that territory’s owner.
The most familiar form of data territory is the digital platform. The most familiar form of platform is social media. Over the past decade numerous scandals have become associated with social media platforms, scandals that are still largely unresolved while the platforms continue to be only partly regulated. But those scandals are merely symptoms of a much wider inequality of power over how data is extracted, stored, processed, and applied. That inequality lies at the heart of what we call “data colonialism”.
The term might be unsettling, but we believe it is appropriate. Pick up any business textbook and you will never see the history of the past thirty years described this way. A title like Thomas Davenport’s Big Data at Work spends more than two hundred pages celebrating the continuous extraction of data from every aspect of the contemporary workplace, without once mentioning the implications for those workers. EdTech platforms and the tech giants like Microsoft that service them talk endlessly about the personalisation of the educational experience, without ever noting the huge informational power that accrues to them in the process. Health product providers of all sorts rarely mention in their product descriptions the benefits they receive from getting access to our data in the growing market for health-related data.
This is a pattern whose outlines have their most obvious historical antecedent in the land grab that launched colonialism five centuries ago: a land grab that reimagined much of the world as newly dependent territories and resource stockpiles for the benefit of a few nations in Europe. Today, what’s being grabbed is not land, but data, and through data, access to human life as a new asset for direct exploitation.
Some think that colonialism as an economic force ended before capitalism properly got under way, and that colonialism was consigned to the past when the institutions of empire finally collapsed in the 1960s. But the neocolonial influences of historical colonialism live on in today’s unequal global economy and embedded racism, and those inequalities are perpetuated by data colonialism.
More than that, the ways of thinking about the world and its populations, about who has a prior claim on resources and the authority of science, live on in a process that Peruvian sociologist Anibal Quijano called “coloniality”. Coloniality – colonial thinking about how knowledge is produced and by whom – is the clearest explanation for the sheer audacity of today’s AI giants who see fit to treat everything humanity has produced to date as fodder for their large language and other models.
In our recent book, Data Grab: The New Colonialism of Big Tech and How to Fight Back, we try to make sense of the parallels between the earlier stages of colonialism and today’s digital world. Doing so also helps us understand the ways in which the racial inequalities that are the legacy of earlier stages of colonialism go on being reproduced in the supposedly scientific guise of algorithmic data and AI processing today. Consider the forms of discrimination that black American sociologists Ruha Benjamin and Safiya Noble have outlined, or the hidden forms of work in the Global South that, as Ethiopian data scientist Timnit Gebru and others have shown, make a huge contribution to training the algorithms of so-called “artificial” intelligence in ways that are rarely recognised by the Big Tech industry.
The ongoing realities of five hundred years of colonialism live on, and are now converging with new inequalities associated with a data grab whose technical means only emerged three to four decades ago. Indeed, as earlier in history, the first step towards resisting this vast and all-encompassing social order is to name it for what it is. Not just the latest improvement in capitalist techniques, but a new stage of colonialism’s ongoing appropriation of the world’s resources for the benefit of a few.