AI Companies Want to Colonize Our Data. Here’s How We Stop Them.
Artificial Intelligence companies are imposing a new “Doctrine of Discovery” on our digital commons, but we can resist.
In recent months, a number of novelists, artists and newspapers have sued generative artificial intelligence (AI) companies for taking a “free ride” on their content. These suits allege that the companies, which use that content to train their machine learning models, are breaking copyright law.
From the tech industry’s perspective, this content mining is necessary in order to build the AI tools that, it claims, will benefit all of us. In a recent statement to legislative bodies, OpenAI claimed that “it would be impossible to train today’s leading AI models without using copyrighted materials.” It remains to be seen whether courts will agree, but it’s not looking good for content creators. In February, a California court dismissed large portions of a case brought by Sarah Silverman and other authors.
Some of these cases may reveal ongoing negotiations, as some companies figure out how to pressure others into sharing a piece of the AI pie. Publisher Axel Springer and the social media platform Reddit, for example, have recently made profitable deals to license their content to AI companies. Meanwhile, a legislative attempt in the United Kingdom that would have protected content generated by the creative industries has been abandoned.
But there is a larger social dilemma involved here that might not be as easy to detect: What about our content — content that we don’t usually associate with copyright laws, like emails, photos and videos uploaded to various platforms? There are no high-profile court cases around that. And yet, the appropriation of this content by generative AI reveals a monumental social and cultural transformation.
It’s easy to miss this transformation, because after all, this kind of content is considered a sort of commons that nobody owns. But the appropriation of this commons entails a kind of injustice and exploitation that we are still struggling to name, one not captured in the copyright cases. It’s a kind of injustice that we’ve seen before in history, whenever someone claims ownership of a resource because it was just there for the taking.
In the early phases of colonialism, colonizers such as the British claimed that Australia, the continent they had recently “discovered,” was in legal terms “terra nullius” — no one’s land — even though it had been inhabited for millennia. This was known as the Doctrine of Discovery, a colonial version of “finders, keepers.”
Such claims have been echoed more recently by corporations wanting to treat our digital content and even our biometric data as a mere exhaust that’s just there to be exploited. The Doctrine of Discovery survives today in a seamless move from cheap land to cheap labor to cheap data, a phenomenon we call “data colonialism.” The word “colonialism” is not being used metaphorically here, but to describe a very real emerging social order based not on the extraction of natural resources or labor, but on the continuous appropriation of human life through data. Data colonialism helps us understand today’s transformations of social life as extensions of a long historical arc of dispossession. All of human culture becomes the raw material that is fed to a commercial AI machine from which huge profits are expected. Earlier this year, OpenAI began a fundraising round for $7 trillion, “more than the combined gross domestic products of the UK and France,” as the Financial Times put it.
What really matters is not so much whether generative AI’s outputs plagiarize content owned by famous authors or powerful media groups. The real issue is a whole new model of profit-making that treats our lives in data form as its free input. This profitable data grab, of which generative AI is just an egregious example, is really part of a larger power struggle with an extensive history.
To challenge this, we need to go beyond the narrow lens of copyright law and recover a broader view of why extractivism, under the guise of discovery, is wrong. Today’s new — and so far largely uncontested — conversion of our lives and cultures into colonized data territories will define the relations between Big Tech and the rest of us for decades, if not centuries. Once a resource has been appropriated, it is almost impossible to claim it back, as evidenced by the fact that the Doctrine of Discovery is still cited in contemporary government decisions to deny Indigenous people rights over their lands.
As with land, so too with data. Do nothing, and we will count the costs of Big Tech’s Doctrine of Discovery for a long time to come.
Applying Historical Lessons in the Age of AI
Unfortunately, one-track approaches to confronting these problems, like quitting a particular social media platform, will not be enough. Since colonialism is a multifaceted problem with centuries of history, fighting back against its new manifestations will also require multifaceted solutions that borrow from a rich anti-colonial tradition.
The most important tool in this struggle is our imagination. Decolonizing data needs to become a creative and cultural movement. It is true that no colonized society has managed to decisively and permanently undo colonialism. But even when colonial power could not be resisted with the body, it could be resisted with the mind. Collective ingenuity will be our most valuable asset.
In our recent book Data Grab: The New Colonialism of Big Tech and How to Fight Back, we outline a number of practical ways in which we can begin to apply this kind of creative energy. We borrow a model from Latin American and Latine activists, who encourage us to act simultaneously across three different levels: within the system, against the system and beyond the system. Limiting ourselves to only one of these levels will not be enough.
What might this look like in practice? Working within the system might mean continuing to push our governments to do what they have so far largely failed to do: Regulate Big Tech by passing anti-trust laws, consumer protection laws and laws that protect our cultural work and heritage. It might seem tempting to want to abandon mainstream politics, but doing so would be counterproductive in the long term.
But we cannot wait for the system to fix itself. This means we need to work against the system, embracing the politics and aesthetics of resistance as decolonial movements have done for centuries. There are plenty of inspiring examples, including those involving unionization, workers’ rights, Indigenous data sovereignty, environmental organizing, and movements against the use of data technologies to carry out wars, surveillance, apartheid and the persecution of migrants.
Finally, we need to think beyond the system, building ways of limiting data exploitation and redirecting the use of data toward more social, democratic goals. This is perhaps the most difficult but most important task. It will require new technologies as well as new ways of rejecting technology. A large collective and imaginative effort is needed to resist data colonialism’s new injustices. This effort is a crucial step on the longer journey to confronting and reversing colonialism itself.
Are we giving away too much online?
Do we really know how much data we’re giving away and how it’s being used? A new book by Nick Couldry and Ulises Mejias explores the murky world of big tech and how we can fight back.
Do you use social media? Shop online? Use a fitness tracker? Have a smart meter in your house? Chat with friends on messaging apps?
So many of our daily activities now take place online, it’s hard to imagine our lives without these services at our fingertips. But how often do you check the terms and conditions when downloading an app or signing up to an online account? How much do you know about the data that you’re giving away and how it’s being used?
In a new book, Data Grab, Professor Nick Couldry from the Department of Media and Communications at LSE and his co-author Professor Ulises A Mejias, a Mexican/US author from State University of New York Oswego, explore how big tech companies use our data and how it can be repackaged to manipulate our views, track our movements and discriminate against us.
They argue that through this “data grab”, colonialism – historically a grab of land, natural resources and exploitative labour, turned into private property – has taken on a new form in which big tech companies control and exploit our data for profit.
The new colonialism
When undertaking research for the book, Professors Couldry and Mejias found data was being extracted from every aspect of human life. “We realised the closest parallel was in the colonial land grab that happened around 1500 when Spain and Portugal suddenly realised there was a whole new world they could grab for themselves,” Professor Couldry says.
“It seemed to us this was a good analogy for the serious scale of what’s happening with data and that's when we started developing a framework for data colonialism. We weren't the first people to come up with this term, but we were the first people to see this as not just a metaphor but a new stage in the evolution of colonialism. What if colonialism could evolve? And that a new land grab could be happening right now, right in front of our eyes, through human life being captured in the form of data?”
A curated universe
Professor Couldry argues we’re at a moment where we are facing a radical change in social life, which will “become enforced until there is no way out of it” and we become ever more reliant on these services.
“We are increasingly going to be locked into a completely curated universe which is governed by corporations rather than ourselves,” he warns. We are already starting to see something like this in China, for example, where the platform WeChat – which started off as an app to chat with your friends – is now being used for all aspects of life.
You can buy goods on WeChat, get credit, submit your tax returns and deal with the government. “It has now become a complete platform for life and, as we know, Elon Musk has a similar vision for the platform X,” explains Professor Couldry.
“All these platforms work off the network effect,” he says. “The more people who are on there, the more convenient it is for you to be on there and the more inconvenient it is for you to step off.”
Professors Couldry and Mejias call this a “civilising narrative” – something which distracts us from the reality of what is going on and makes it seem more palatable, even appealing. With data extraction, we are told that it will make our lives more convenient, and we will be better connected to each other. With historical colonialism, the notions of progress or Christian salvation were often given as a justification.
The dark side of data
On a personal level, you might not be too worried about your data being collected; you might think you are resistant to its negative effects. At worst, you think, it might lead to targeted adverts.
However, on a macro level, when our data is aggregated it can be used in ways we could never imagine. For example, it can be used to train algorithms to make decisions that affect large groups of people. Decisions such as whether you receive state support, are successful in a job application or have a visa approved. These algorithms can be opaque and discriminatory, leaving us with little knowledge about how a decision was made. And, like historical colonialism, the effects are usually felt most strongly by those who are already vulnerable.
And that is before we get on to the damage data collection can do to the environment. Data requires processing by huge banks of computers (known as data centres), which use a significant amount of electricity and deplete the power supply for other uses. In the book, the authors cite the example of west London where the building of much-needed new homes has been constrained until at least 2035 due to a lack of electricity supply caused by the expansion of data centres in the area.
It is estimated that data centres will use between 3 and 13 per cent of all electricity globally by 2030, compared with the one per cent they used in 2010. The electricity they consume generates heat, and the servers must be cooled using vast amounts of fresh water. Thames Water has already expressed concern that its water supplies are getting dangerously low and data centres are a key reason behind this.
How to fight back
This all paints a very bleak picture, but Professor Couldry doesn’t want us to despair. He argues this future can be averted by a large, collective effort to resist data colonialism’s injustices. “We can only change things together and we need to help each other make these changes. This is what we try and offer in the book: a new vision to help people understand that it doesn’t need to go this way.”
To offer inspiration, Data Grab provides examples of individuals and groups who are resisting. In the US, 17 communities have issued bans against the use of facial recognition software by police. Workers across the globe are taking a stand, and there is an increasing number of unions at companies like Google, Apple and Amazon. Gig workers are taking matters into their own hands, exerting pressure on governments to guarantee their basic rights. Some are even undertaking “account therapy”, which involves coaxing algorithms to behave in ways more favourable to workers, countering their exploitative effects.
Whistleblowers such as Edward Snowden and Frances Haugen have helped expose the US surveillance apparatus and the willingness of big companies to put profit before the safety and mental wellbeing of their users. Some companies, such as Lush cosmetics, have closed down some of their social media accounts, taking the financial hit for doing so, because of the harmful effects of these platforms.
Not all actions have to be on a large scale. As is noted in the book, “even putting your phone down for a couple of hours might be an act of defiance”. Likewise, refusing to accept cookies when visiting a website might be a form of resistance – something which apparently so far only 0.5 per cent of users do.
Professor Couldry also outlines several alternative platforms which are focused on community rather than profit and can be used instead of mainstream apps. These are known as “federated platforms”. The best-known is probably Mastodon which is an alternative to X. Pixelfed can be used for sharing photographs and PeerTube is a federated video-sharing platform.
With our lives increasingly taking place online, we are giving away more data than ever. Maybe, as Professors Couldry and Mejias state, “in the long run, a life full of smart devices is not really smart at all.” Maybe this is the time to take a stand.
Professor Nick Couldry was speaking to Charlotte Kelloway, Media Relations Manager at LSE.
Resonance, Not Scalability
Interdisciplinary Workshop on Reimagining Democracy Essay Series
Over the past three decades, humanity made a fundamental error in spatial design—an error that makes it vanishingly unlikely that we can create positive conversation spaces for democracy. I’ve been trying to think about how we might correct that error; in fact, I have just completed a book on the topic. My book is written from the perspective of a social theorist who thinks mainly about data institutions and social order.
The error’s outlines will be familiar, even if how I describe it may not be. In essence, we’ve (inadvertently) allowed businesses to generate what I call “the space of the world”: the space of (almost) all possible spaces for social interaction and therefore for democratic practice. That didn’t happen because we asked businesses to design the spaces where democracy plays out; no one ever planned that. It happened because we allowed large corporations to design shadow spaces (we call them platforms and apps) with two key properties: First, these spaces can be controlled, indeed exploited, by these large corporations for their own ends, mainly profit. Second, these spaces mimic aspects of daily life sufficiently well that they bolt on to our actual social world under conditions largely dictated by these corporations (above all, the condition that we are incentivized to use them because everyone else is).
By allowing large corporations to promote these shadow spaces, humanity made three fundamental design mistakes that are now constraining our social world and our democratic futures. Our first mistake was allowing the creation of a space of spaces of unlimited scale to which everyone potentially could connect, without regard to the consequences of allowing unlimited feedback loops across all our activities within that space. The toxic results have been seen on every scale.
Second, we never even considered the possibility of designing and controlling the spaces in between our platform spaces. We neglected that possibility completely, allowing businesses simply to optimize engagement and the profit that flows from it, whatever the scale.
Third mistake: We let ourselves be driven by the value of unlimited scalability—exactly the wrong value for social and political design. Indeed, it is a value orthogonal to how political theory has, for millennia, thought about the conditions under which democracy—or any nonviolent political life—is possible. Neither of the two main traditions of Western political theory—the Aristotelian idea of politics as a natural human activity on a relatively small scale or the Hobbesian idea of a social contract for societal security—ever imagined that politics could safely unfold on the scale of the planet, or in smaller spaces of continuous interaction with unlimited playback and feedback.
If you’d asked anyone 30 years ago (political theorist or not) whether it made sense to build a space like the one that has emerged—linking together all possible social and political spaces and, what’s more, incentivizing feedback loops of engagement across it—they would have said, “No, don’t do it.” But we did it, and we need to actively think about what it might mean to dismantle the space we’ve built—or at least override it with different values and different design thinking.
We can’t erase the idea of platforms, let alone the internet, as a space of connection. Instead, we need a very different approach to rebuilding our space of the world. It’s a problem that we unwisely got into, but now we have no choice but to invent better solutions—solutions that are less risky for democracy. To start, we need to think about platform space in a completely different way, securing the “spaces-in-between” (the firebreaks, if you like) that limit flow and enable “friction,” as legal scholar Ellen Goodman puts it, and reducing some of the risks of toxic feedback loops (we can’t solve them all). Whatever their limitations currently, I believe that federated platforms, such as Mastodon, point in the right direction.
Second, and because we will have started to build spaces-in-between, we should trust more in the new possibilities those firebreaks protect: the possibility of discussion in spaces whose values and purpose align more with specific communities rather than just abstract business logics. Put in political terms, this means trusting more in subsidiarity and rejecting scalability as a guiding value.
Third, this opens up the possibility of giving a greater design role to existing communities as hosts of platform spaces, and to government and civil society, not as hosts (the risk of censorship is too great) but as general sources of subsidy for the infrastructure on which healthy spaces of social encounter and civic discussion depend. This aligns with what communications scholar Ethan Zuckerman has called an “intentionally digital public infrastructure.”
All this means thinking about design differently by moving away from the mixture of narrow economic logic plus a roll of the dice that has characterized how today’s space of the world has unfolded. But that’s hard without a guiding principle. To give us one, I want to return to the principle of resonance that I tentatively introduced at last year’s International Workshop on Reimagining Democracy (IWORD). Then, I talked about it in perceptual terms: as basically the possibility of sharing with others the perception that, even if you don’t entirely trust each other or the government, you are all in various ways responding to broadly the same set of problems within broadly the same horizon of possibility.
What I hadn’t realized then is that the design choice that makes this possible is even more important than this shared perception. It’s that alternative design approach to how we configure large social space for which I now want to reserve the term “resonance.” In the physical world, resonance occurs when sound waves at multiple frequencies propagate across a space and an object starts vibrating at whichever frequency in the source matches its own natural frequency. That resonating doesn’t happen because a frequency is imposed on the object, or because a set of external priorities forces that particular frequency onto the space. It results from the interaction between the sound source and the properties of the object itself; this positive, non-disruptive outcome occurs without any external attempt to optimize for one solution.
Yet while resonance builds from the natural frequencies of objects, today’s social media landscape seems to be built against our natural frequencies, undermining whatever helps democracy and our common interests. Last year at IWORD, science fiction writer Ted Chiang asked, “How do we stop AI from being another McKinsey?” In other words, why are we locked into seeing AI only in terms of what it can do for capitalism? The same point could be made in relation to the design of digital spaces and platforms: Why think about them only within a framework driven by profit extraction? How is that useful for democracy? It’s not a rejection of markets to suggest that, in designing the spaces in which we live, we should be oriented by broader principles of what’s good for life in general, for democracy, and for making better collective decisions. That yields different priorities.
Let me list a few:
• Always build platforms and spaces to the smallest scale needed.
• Always pay attention to the spaces-in-between (or the firebreaks).
• Maximize variety and experimentation (the other side of the minimum scale principle).
• Trust communities of various sorts as the best context for platform use and development.
• And, because we are freed now from the business goal of scalability, don’t maximize people’s time spent on any one platform. Instead, do everything to encourage more connections between online spaces and between offline and online spaces—connections whose intensity actual communities have some chance of managing.
Do all this, and we might have a chance of fulfilling political scientist Elinor Ostrom’s principles for protecting the commons, which included maximal decisional autonomy at the local scale and protecting the boundaries between groups and spaces. This might also yield a modest but workable approach to the other spatial possibilities for redesigning democratic practice that digital technologies really do enable. For example, why shouldn’t populations forced by climate change to migrate have a say in where they can move and under what conditions? Why should it only be the receiving states that get a say? We need to find some technological solutions.
Fail to rethink the design of platforms, and I fear we’ll forever be condemned to mop up the mess that commercial platforms have generated around a societal challenge they should never have been allowed to mess with in the first place.
A publication of the Ash Center for Democratic Governance and Innovation
Twenty Years of Media and Communications Research: from Media Studies to Media Ecology
LSE’s Department of Media and Communications celebrates its 20th anniversary this year, and is marking the occasion with the upcoming Media Futures Conference on 15-16 June. Here Nick Couldry, Professor of Media, Communications and Social Theory at the LSE, reflects on how the study of media and communications has evolved in the 20 years since the Department was founded.
Look back to conference programs of 2003, the year in which LSE’s Media and Communications department was founded, and the sense of discontinuity is strong. So many topics have since faded (telecentres, digital divide, reality TV). Priorities have changed, and the huge interdisciplinarity of perspectives that we now take for granted was largely absent.
Yet important things have endured. Contrary to breathless predictions, television did not die, even if prime time has shrunk and is now distributed across multiple streaming channels. Nor did radio die (quite the contrary) or newspapers (yet). Habits are, after all, more enduring than hype or futurology.
Nor have battles for the status of the media and communications field been entirely resolved. Although few people in rich societies believe that media in the extended sense (to include everything we do with our smartphone and other computing devices) are without implications for the feel and structure of daily life, that has not stopped established disciplines from continuing to operate as if media do not matter: from economics to political theory, from international relations to social theory, media and communications tend at best to be an afterthought, with notable exceptions such as the work of Manuel Castells and Judith Butler. Worse, there is still work (I won’t give examples) that presumes to talk about media as something in common experience without any attention to the extensive literatures in media and communications research.
Inside the field, some battles also lie unresolved. I remember the urgency with which arguments for the importance of religion in the field were being made at the International Communication Association in 2003; yet twenty years later there is still no ICA division or interest group focussed on religion, such is the field’s default secularism.
But those enduring patterns mask more fundamental change.
When the LSE Department was founded in September 2003, the shock of the World Trade Center attacks two years earlier still reverberated. The need for some way of opening up ethical debate about the role of media as weapon, and the potential of media as a space for recognising others who are silenced in global media agendas, were high on our list of concerns. Roger Silverstone’s book Media and Morality was the fruit of that, as, later, was Lilie Chouliaraki’s work at LSE (The Ironic Spectator) and, in another way, my own.
In the years that followed, that early recognition of the need for a media ethics going beyond journalistic codes grew into a veritable paradigm shift, with ever more books over the past decade or so foregrounding ethics as their core question, controversially or otherwise. Think of such different books as Sherry Turkle’s Alone Together, Robin Mansell’s Imagining the Internet, or Mark Deuze’s Media Life (a rare positive take). Indeed, one could argue that this expanding sense that something is ethically troubling about the media landscape was what began to connect communications research on media and the internet to previously distant work in legal theory, such as Julie Cohen’s synoptic book Configuring the Networked Self. The sense of a common cross-disciplinary topic about the nature of our media and information environment had emerged by the early years of the 2010s.
If one way of describing this shift, seen from the perspective of media studies’ older agendas, was ‘ethics’, a newer way of formulating it was in terms of ecology. While the topic of media ecology (and even sound ecology) had a longer history in North America, it absolutely was not a familiar way of framing what there was to talk about in media studies two decades back. But Roger Silverstone’s statement in the preface to Media and Morality that ‘global societies’ are facing an ‘environmental’ ‘crisis in the world of communication’ (2007: vi) sensed the direction of travel, even if for a media landscape that looked very different from today’s.
Just a few years after Silverstone, Julie Cohen’s call for a ‘cultural environmentalism’ that can help us see more clearly the problems with our growing dependence on computing infrastructures and platforms that continuously surveil us seemed both original and absolutely inevitable. By the start of the last decade, our sense of the scale and nature of what there was to be discussed about media had changed profoundly.
For one thing, it no longer made any sense to talk about media without also talking about the internet and the whole matrix of information and communication technologies in which legacy media like television, radio and the press are now embedded.
For another, old battles between political economy and cultural approaches to formulating the key questions about media now seemed quaint, because it was a profoundly changed political economy that, in full view, has driven the changes in how media culture feels.
The core thing that had changed was, of course, the rise of social media platforms and indeed the rise of platform-focussed capitalism generally: not as one phenomenon among others, but as a total transformation of the economic, social and technological space in which media survive or die, grow or wane. Our smart phones are portable ecologies of media inputs, but, more than that, they give access to, indeed demand our attention to, a transformed ecology of social communication.
Ten years from the beginning of this ecological (and ethical) turn in media communications and information research, it is versions of ecological thinking that are opening up new avenues for exploration in our field. To give some diverse examples: Amanda Lagerkvist’s work on Existential Media, Thomas Poell, David Nieborg and Brooke Duffy’s work on Platforms and Cultural Production, Shakuntala Banaji and Ramnath Bhat’s work on Social Media and Hate, the work by Deen Freelon and other political communications researchers on Disinformation, and Sarah Banet-Weiser’s work on popular misogyny.
Those are just a few examples among many of exciting new directions of research. But they show that, at this stage in our field’s history, the starting-point has become thoroughly ecological, in a way that it was only subliminally twenty years ago. Our field has come a long way and yet, in a sense, it has taken just a small step towards addressing the growing challenges posed by capitalist communication platforms for our chances of living well together in the future.
This post represents the views of the author and not the position of the Media@LSE blog nor of the London School of Economics and Political Science.
It’s time to stop trusting Facebook to engineer our social world
As a recent US Senate hearing is told that Facebook prioritises its profits over safety online, Nick Couldry, Professor of Media, Communications and Social Theory at the London School of Economics and Faculty Associate, Berkman Klein Center for Internet and Society, Harvard University, argues that public scrutiny and a tighter regulatory framework are required to keep the social media giant in check and limit the social harms that its business model perpetuates.
The world’s, and in particular the USA’s, reckless experiment with its social and political fabric has reached a decision-point. Almost a year ago Dipayan Ghosh of the Harvard Kennedy School of Government and I argued that the business models of Facebook and other Big Tech corporations unwittingly aligned them with the goals of bad social actors, and needed an urgent reset. Why? Because they prioritize platform traffic and ad revenue over and above any social costs.
Yet, in spite of a damning report by the US House of Representatives Judiciary Committee last October and multiple lawsuits and regulatory challenges in the US and Europe, the world is no nearer a solution. But in the case of Facebook, whistleblower Frances Haugen’s shocking Senate testimony last week confirmed exactly what we argued: that this large US-based corporation is “buying its profits with our safety”, because it consistently prioritizes its business model over correcting the significant social harms it knows it causes.
As Robert Reich notes, it would be naïve to believe that accountability will follow the public outcry. That’s not how the US works anymore, nor indeed many other democracies. Meanwhile Mark Zuckerberg’s response to the new revelations rang hollow. Of course, he is right that levels and forms of political polarization vary across the countries where Facebook is used. But no one ever claimed that Facebook caused the forces of political polarization, which inevitably are variable, only that for its own benefit it recklessly amplified them.
Nor, as Zuckerberg rightly protests, does Facebook “set out to build products that make people angry or depressed”: why would they? But the charge is more specific: that Facebook configured its products to maximize the measurable “engagement” that drives its advertising profits. Facebook’s 2018 newsfeed algorithm adjustment, cited by Haugen, was a key example. Yet we know from independent research that falsehoods travel faster, more deeply and more widely than truths. In other words, falsehoods generate more “engagement”. So, optimizing for “engagement” automatically optimizes for falsehoods too.
It is not good enough for Facebook now, under huge pressure, to claim credit for the “reforms” and “research” it conducted in earlier attempts to mollify an increasingly hostile public. Facebook can say, as Mark Zuckerberg just did, that “when it comes to young people’s health or well-being, every negative experience matters”, but its business model says otherwise, and on a planetary scale. It is time for that business model to be examined in the harsh light of day.
The problem with the underlying business model
In a report published a year ago, Dipayan Ghosh and I called this model the “business internet”. Its core dynamics are by no means unique to Facebook, but let’s concentrate there. The business internet is what results when the vast space of online interaction becomes managed principally for profit. It has three sides: data collection on the user to generate behavioral profiles; sophisticated algorithms that curate the content targeted at each user; and the encouragement of engaging – even addictive – content on platforms that holds the user’s attention to the exclusion of rivals. A business model such as Facebook’s is designed to maximize the profitable flow of content across its platforms.
If this sounds fine on the face of it, remember that the model treats all content producers and content the same, regardless of their moral worth. So, as Facebook’s engineers focus on maximizing content traffic by whatever means, disinformation operators – wherever they are, provided they want to maximize their traffic – find their goals magically aligned with those of Facebook. All they have to do is circulate more falsehoods.
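To make that dynamic concrete, here is a minimal toy sketch in Python (an illustration of the argument, not Facebook’s actual ranking system): if false items attract more engagement than true ones, as the independent research cited above suggests, then a ranker that simply maximizes predicted engagement will over-represent falsehoods in the resulting feed. The class names, the 20 per cent share of false content and the 1.7× engagement boost are illustrative assumptions, not figures from any report.

```python
import random
from dataclasses import dataclass

# Toy model of an engagement-ranked feed. Illustrative sketch only: the
# numbers and the ranking rule are assumptions, not Facebook's real system.

@dataclass
class Post:
    is_false: bool
    engagement: float  # predicted engagement (clicks, shares, comments)

def make_posts(n, false_share=0.2, false_boost=1.7):
    """Generate a pool of posts in which false items attract more
    engagement, echoing the finding that falsehoods travel further."""
    posts = []
    for _ in range(n):
        is_false = random.random() < false_share
        boost = false_boost if is_false else 1.0
        posts.append(Post(is_false, random.random() * boost))
    return posts

def rank_by_engagement(posts, k):
    """The 'business internet' rule: surface whatever maximizes predicted
    engagement, with no regard to whether it is true."""
    return sorted(posts, key=lambda p: p.engagement, reverse=True)[:k]

if __name__ == "__main__":
    random.seed(0)
    pool = make_posts(10_000)
    feed = rank_by_engagement(pool, k=100)
    print(f"False content in the pool: {sum(p.is_false for p in pool) / len(pool):.0%}")
    print(f"False content in the feed: {sum(p.is_false for p in feed) / len(feed):.0%}")
```

Under these assumptions, the top-ranked feed carries a far higher share of false content than the pool it was drawn from, which is the mechanical sense in which optimizing for “engagement” optimizes for falsehoods.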
Facebook will no doubt say it is doing what it can to fix those falsehoods: many platforms have tried the same, even at the cost of damping down the traffic that is their lifeblood. But the problem is the underlying business model, not the remedial measures, even if (which many doubt) they are well-intentioned. It is the business model that determines it will never be in Facebook’s interests to control adequately the toxic social and political content that flows across its platforms.
The scale of the problem is staggering. As recent Wall Street Journal articles detail, Facebook’s business model (and its obsession with controlling short-term PR costs) pushes it to connive at celebrities posting content that even Facebook’s rules normally ban, to discount the impacts of Instagram’s image culture on teen girls’ self-esteem, to misunderstand the consequences for political information when it tweaks its newsfeed algorithm, and to fail in its own drive to encourage Covid vaccine take-up.
Some Facebook staff seem to believe that the Facebook information machine has become too large to control.
Yet even so, we can easily underestimate the scale of the problem. We may dub Instagram the ‘online equivalent of the high-school cafeteria’, as the Wall Street Journal does, but what school cafeteria ever came with a continuously updated and universally accessible archive of everything anyone said there? The problem is that societies have delegated to Facebook and other Big Tech companies the right to reengineer how social interaction operates – in accordance with their own economic interests and without restrictions on scale or depth. And now we are counting the cost.
A turning point?
But thanks to Frances Haugen, through her Senate testimony and her role in the Wall Street Journal revelations, society’s decision-point has become startlingly clear. Regulators and governments, civil society and individual citizens could consign the problem to the too-hard-to-solve pile, accept Facebook will never fully fix it, and allow the residual toxic waste (inevitable by-product of Facebook’s production process) to do whatever harm it can to society’s and democracy’s fabric. Or key actors in various nations could decide that the time for coordinated action has come.
Assuming things proceed down the latter, less passive path, three things require urgent action.
Facebook should be compelled by regulators and governments to reveal the full workings of its business model, and everything it knows about their consequences for social and political life. Faced with clear evidence of major social pollution, the public cannot be expected to rely on the self-motivated revelations of Facebook’s management and their engineers working under the hood.
Based on the results of that fuller information, regulators should consider the means they have to require fundamental change in that business model, on the basis that its toxicity is endemic and not merely accidental. If they currently lack adequate means to intervene, regulators should demand extended powers.
Equally urgent action is needed to reduce the scale on which Facebook is able to engineer social life, and so wreak havoc according to its whim. At the very least, the demerger of WhatsApp and Instagram must be put on the table by the US FTC. But a wider debate is also needed about whether societies really need platforms on the scale of Facebook to provide the connections on which social life undoubtedly depends. The time has passed when citizens should accept being lectured by Mark Zuckerberg on why they need Facebook to “stay in touch”. More comprehensive breakup proposals may follow from that debate. Meanwhile, analogous versions of the “business internet”, in Google and elsewhere, also need to be examined closely for their social externalities.
Some fear that the medicine of regulatory reform will be worse than the disease. As if the poisoning of democratic debate, the corrupting of public health knowledge in a global pandemic, and the corrosion of young people’s self-esteem, to name just some of the harms, were minor issues that could be hedged.
Something like these risks was noted at the beginnings of the computer age, when in 1948 one of its founders, Norbert Wiener, argued that with “the modern ultra-rapid computing machine . . . we were in the presence of [a] social potentiality of unheard-of importance for good and for evil”.
Nearly 75 years later, Wiener’s predictions are starting to be realized in plain sight. Are we really prepared to go on turning a blind eye?