Regulating content moderation on digital platforms is one of the key issues of our time. Hate speech, terrorist propaganda, covert manipulation of the public sphere, harassment of minorities, automated discrimination: none of this was contemplated in the digital utopias. And yet it is increasingly clear that these are real problems, threatening our very ability to speak and act freely online. At the same time, an over-reaction to these very real issues can easily lead our democracies astray as well — hindering freedom of speech and users’ rights in an impossible quest for online purity and harmony.
A balance must be struck here, but it’s difficult to see where. The trade-offs are many, and the rights to be kept in equilibrium are fundamental. And the issue — how to regulate the web so that it actually ends up being a better place for all of us — is trans-national, global by definition. This is why we decided to ask David Kaye some tough questions. As the United Nations’ Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, he is in a unique position to evaluate these complex issues with both insight and a global vision. Content moderation is also precisely the topic of his forthcoming book, ‘Speech Police: The Global Struggle to Govern the Internet’. Over a Skype call, we discussed what shape this crucial struggle is currently taking.
Mr. Kaye, the book opens with a passage from political theorist Benjamin Barber, arguing in 1998 that either the web “belongs to us” or it has nothing to do with democracy. Fast forward to 2019, and the web appears to belong to Facebook, Google, Amazon, Tencent, Alibaba — not its users. We are now about to rewrite the rules of the game, internationally, and your book argues that “democratic governance” is essential: “It’s time to put individual and democratic rights at the center of corporate content moderation and government regulation of the companies,” you write. And yet throughout the essay you also argue that many of the remedies being proposed for the “dark sides” of the web may further undermine democracy instead of strengthening it. What’s going on, and why should users be concerned? Why a book that goes by the sinister, Orwellian title of “Speech Police”?
That’s a bunch of hard questions. Let me start with the title. “Speech Police” is in many respects designed to be descriptive. We have all of these different actors now that are competing to police our speech online. You have the companies, you have governments, you have the public that is pushing for different kinds of constraints — we might think of them as social constraints. The digital age, and social media in particular, have made us focus more than we used to, certainly in democratic countries, on those who control what we can and cannot say. And that’s a big difference from the kind of environment we lived in before. So, as for the quote from Benjamin Barber, ultimately what I want us to be thinking about is: how do we strive to protect the original promise of the internet, which was indeed a democratic space, a place where people had a voice and where you had open debate? That for me is the core of Benjamin Barber’s message — and it came some twenty years ago… My view is that we need to take a hard look at how governments regulate the internet and how the companies do it, so that we can move towards a space where we feel there’s democratic control of the online space. I talk in the book about different possibilities for doing that, but so far I think governments and companies have not been succeeding in making that environment happen.
Since Brexit and Trump, we’ve been constantly reading that social media, by their very existence, “destroy democracy” — if not the whole of “civilisation”. Do you think these claims are evidence-based or not? And what do you make of them?
I would avoid the broad claim about social media “destroying democracy”. Those kinds of broad claims are the kinds of claims that lead to over-broad regulation, and regulation that tries to solve all social problems rather than just the problems that might be caused by online media. From my perspective, I want us to focus specifically on the things that we think are problems online. I’ve identified some of them — hate speech, disinformation, incitement to violence, things like that. And I think that each of these problems has certain kinds of solutions that we can address without undermining all the other values that we have as a democratic society when it comes to freedom of expression and privacy and so forth.
The global conversation around how to regulate a common space in which almost all of humanity is connected truly is a crucial one. Are we having it in a healthy way, though? Many digital rights organisations and academics argue that most of the claims put forward in the media and by politicians are completely devoid of evidence, and more generally warn that norms and rules are being written in response to moral panics, media sensationalism and prejudice — if not the interests of a dying, old media world that long looked at the internet more as an enemy than as a fundamental change in human communications.
That’s a separate problem that I’m not addressing in the book — the role that social media, and Facebook in particular, play in undermining the media. I think some of those claims are probably overblown. They have definitely had an impact on traditional media, but the book is not about that. What I’m really trying to do is to get people to think not only about the specific problems of hate speech or disinformation, but rather about how to make decisions around those problems. Should those be problems that are resolved only by the companies? That heads toward a place where we end up having corporate decision-making and profit-driven decision-making. So we don’t want that — although the companies do have responsibilities. We also don’t want governments to be making these decisions in ways that, as you said, are driven only by a sense of moral panic. They should also be driven by a question: how do we maintain and promote the original ideals of the internet? And my main concern there is that when governments, particularly in Europe, have been trying to address this particular problem, they’ve been doing it in incredibly sloppy, irresponsible ways. I get the motivation to do it. NetzDG is a well-intentioned piece of legislation that, either inadvertently or on purpose (it doesn’t really matter), gives the companies more power to make decisions about what German law is. That’s not democratic. So we need to step back, take a breath and think: what makes sense? How do we want the decisions to be made here? To what extent do we want them made by the companies, which have an obvious responsibility to protect rights? To what extent do we want governments to rethink the role of traditional public institutions, like our courts, and make decisions about what is lawful and what is not? And once we make those decisions, I think the specifics around hate speech or disinformation will answer themselves, in a way, because they should be rooted in democratic principles and overseen and constrained by democratic institutions.
Is Europe actually a role model here, though? Yes, it internationally paved the way for strong, solid privacy legislation with the GDPR. But considering, for example, the debate around the risks of the copyright Directive for the fundamental rights of users, or the dangerous rhetoric about “fake news” adopted by many European leaders (twisted by political leaders all over the world into a tool against the free press), can we still say that the EU is part of the solution, or has it become part of the problem too? In the book, for example, you speak of a sort of “liability plus” that would be imposed upon platforms even before the “illicit” or “harmful” content is actually posted. That sounds frighteningly like a Chinese model of governance, rather than a properly European one — as it implies having upload filters for basically everything the government doesn’t like, in order to remove all ills from the online world and make it a sanitised, clean environment for a “harmonious” society… Is this a good policy posture for Europe?
When you look at what’s happening in Europe, I actually think it’s hard to make a general claim, because when you drill down beneath the different policies and policymakers, you just see a lot of difference. So at a bureaucratic level, if you look for example at the European Commission, there are a lot of good people there who are trying to get this right. But I think they get a lot of pressure from some governments to “eradicate” — yes, this is the word you hear regularly — hate speech or terrorist content; like there’s going to be some laser weapon to do it! I think that pressure, which is very political, is really problematic, and you see it in different spaces. On the other hand, I’m not depressed about the situation. I think there’s hope. And the most recent form of hope is the French social media regulation that has just been released. I looked at the introduction, and it really uses language that is similar to the kind of language that I’ve been talking about, and that many people in civil society have been talking about for the last several years. Which means you want to have plurality of media, diversity, human rights norms, you want to base your decisions on necessity and proportionality, and you want to have your courts involved in making decisions. I think that’s good. Seeing something like that gives me hope that there could be some constraint. On the other hand, the French are also pushing for the terrorist content directive (which would force digital platforms to remove terrorist content within one hour, Ed.). So, no government and no institution is monolithic, and part of my goal in the book is to try to encourage the good policy and the good approaches around this.
In the last chapter of your book you detail some ideas to try and understand what “good” should mean here. It can mean, for example, decentralised decision-making practices, in which platforms much more systematically engage with local civil society activists and regular users in devising policy responses. It can also mean, you write, adopting “human rights standards as content moderation norms”. Can you speak about this constructive side of your essay? What is it that we should actively be fighting for?
The first thing I would say, both to governments and companies, is how important transparency is. Transparency gets a bad rap, in a way, because it’s seen as a contentless approach — “just open it up and it will solve our problems!” — and that’s not really my argument. My main argument is that, in the absence of real transparency about what the companies and governments are doing, it’s very hard to have public conversations about the way forward and about what’s actually happening on the platforms. So what I really believe is that companies need to be transparent about both their rule-making and their decision-making. I think they need to create a kind of case law where they expose in radically transparent ways what it is that they’re doing on specific claims around content. And that’s a kind of general transparency — it means publishing in a more granular way what they’re doing with content, and also the inputs into their algorithmic decision-making. But it also means being more transparent to the people who are making claims on their platform. When I flag content, I often don’t get a reasoned response as to what the platform decided to do. So there needs to be just a much, much better approach to that. The second part of it, though, is that governments need to be much more transparent too. My book isn’t totally about this, but I mention it from time to time: governments — both democratic and authoritarian ones — are putting enormous pressure on the companies to take down content. I would say they are often — usually — doing it outside of their legal framework. So law enforcement calls up a platform and says: “take this content down”. Or maybe it comes through the courts, but that’s relatively unusual. There’s just very limited transparency around that. I think of a situation like Kashmir, where it’s pretty clear that India is putting enormous pressure on Twitter and Facebook to take down Kashmiri activists’ and reporters’ accounts — but there’s very little transparency around that from the companies, and almost none from the government.
You argue that the platforms themselves are well-intentioned, but — no matter how hard they try — they just can’t moderate so much content and grapple with such complex issues. These are legitimate and understandable concerns. And yet, time and again these same platforms seem unable to perform even the basic policing you would expect, especially in places such as Myanmar, where Facebook’s moderation presence has been basically non-existent, even in the wake of serious calls for violence and genocide on its platform. Also, and in striking contrast, you explicitly mention “politically cross-checked” pages, which “tend to be high-profile or popular pages requiring more than one level of evaluation before the company takes action against them”. Do you think there is some kind of preferential treatment for political or opinion leaders more generally, compared to the way in which regular users are treated? And is this fair?
Yes on your first question. All the platforms now have an explicit policy around newsworthiness. My concern with newsworthiness is this: it’s important that what leaders are saying is publicised, but it also means that you have parallel standards. If a random user, like you or me, does something that the platform thinks violates the rules, it could be taken down — fairly quickly or not — and there could be some account action. If you’re the president of the United States, or a member of UKIP…
Or Matteo Salvini…
…then it might stay up, exactly — this is worldwide, it’s not just the US or UK. The companies need to be thinking a lot more about that particular standard, about whether it makes sense. It might. Or maybe the approach should be something like, “well, the real problem with Donald Trump tweeting out some harassing content is that it gets shared so much, it becomes viral. So maybe there are tools other than censorship, tools limiting his reach. Maybe it makes much more sense that the content is findable but not pushed”. There are all sorts of approaches. Then your second question is: is it fair? Well, it’s a question of “fair” according to whom. Is it fair for the public? I mean, there’s a decent argument that the public should know when leaders are racist. There’s a part of Trump’s Twitter feed that is extraordinarily illuminating — of course to intelligence services, but also to voters. And that’s not a bad thing. When we ask or answer questions like that, and we are all asking and answering them, it depends on where you sit. And we should acknowledge that.
A recent op-ed in the New York Times by Facebook co-founder Chris Hughes re-ignited the debate around breaking up Big Tech, starting with Facebook. Can that be part of the solution to any of these problems?
There’s no question that this is part of the conversation now. Is it the answer? Maybe, but I think it’s not enough to talk about the break-up. It is kind of a negative approach, and I don’t disagree with that; it’s a reaction to what Facebook has become. But if you do break it up, then what do you do next? I think a real, honest conversation about it has to recognise that it’s not just about regulatory change on competition; it’s also about the responsibility of governments to create an environment that is much more amenable to competition. That might mean tax incentives for new social media platforms. It may mean creating more socially funded public social media. There are all sorts of ideas that we haven’t even really considered in significant ways, because antitrust is only one part. Competition policy has to be followed by actual investment in pro-competition and pro-diversity approaches. And that means all sorts of assessments of the information environment, the media environment, and so forth. Even if you break up Facebook, which is Chris Hughes’s point, I’m not quite clear on how that has an impact on Facebook as a platform, putting aside Instagram and WhatsApp. How does that affect its rules, and its reach? I’m not sure it changes all that much, and I’m especially not sure it changes much in terms of the global platform. This has been one of my biggest questions to those pushing for a break-up: what is the American responsibility to ensure that a break-up doesn’t cause real harm to users overseas? Because, you know: you break it, you own it. A US company has developed an extraordinary power in markets and jurisdictions outside of the United States; so now US policymakers have a definite responsibility to ensure that when they try to fix that problem in the United States, they don’t inadvertently cause massive problems outside the United States. It’s just a huge externalities problem that I don’t think anybody is really focusing on right now in the US.
I can clearly remember a time, before Brexit and Trump, when a discussion was ongoing about how to change the fundamental governance of the Internet. Has this debate evolved, and does it have anything to do with what we’re talking about today?
One of the serious, serious problems right now is that these debates are happening country by country. There aren’t really that many people who are thinking about these problems from the specific perspective of the global reach of the platforms. This is no longer — even if it ever was, and certainly not now — a problem that can be solved country by country. There are definitely global elements to this. The problem is that, the way the international system is organised, everything comes down to national jurisdictions. So you can’t say to France or Singapore: “don’t regulate”. The most we can say is: “If you are going to regulate, you need to regulate according to fundamental principles of human rights law”.
That would be enough, though. Why don’t social media platforms adopt human rights law as the global standard? It wouldn’t be that difficult, would it?
Yes, and that is why I’m a little bit hopeful about the French proposal. Because, even if they don’t say it explicitly, they do put international standards upfront in their proposal. And there’s a positive sense there that they don’t want to get it wrong, that they — at least, those who drafted this proposal — understand that there’s some very significant value that one can get out of the platforms and the Internet, and we need to ensure that that part of it, the part about democracy, the economy, innovation, can be promoted while dealing with the negative stuff. And then the secondary part of this — and my hope is that governments like France would think this way — is that when we do this regulating, we should make sure that we don’t inadvertently give support to authoritarians who would use our language in order to take action that is really detrimental to individuals and their rights worldwide. It’s just something that democratic governments have to do.
Facebook is finally opening up some data to some researchers. But it is also restricting access to it in other ways. And in any case, we still don’t have nearly enough of it if we want policies around digital platforms to be meaningfully evidence-based. What should we do about this? And shouldn’t we promote and fund much more research — especially in democratic countries — to actually understand what is going on with online content before regulating it?
Yes, absolutely. Across the board, you ask any researcher who works on Internet issues, particularly social media, and they will tell you the same thing: that it’s been virtually impossible to get full information from the platforms — and that needs to change. There is a way in which the GDPR is making that harder, because the kind of information that researchers need might also be the kind of information that the GDPR prohibits, or at least restricts, the companies from sharing. And so the companies have gotten their lawyers engaged, and are concerned that if they share information they could also be subject to liability. We really need to make sure that there are carve-outs in privacy protections that allow for at least confidential use of material from the platforms. I think that’s going to happen, and we need to find a way to make that data accessible.
The first of Kranzberg’s laws famously states that “Technology is neither good nor bad; nor is it neutral.” The same can arguably be said of digital platforms, which makes it problematic to establish who is actually responsible for content posted by their users. We used to conceive of them as tech companies, shielded from direct liability for user-generated content — both in the US and the EU. Now, many would like to file them under the “media companies” label, so that they would be ultimately responsible for all content on their platforms. The recent UK White Paper on online harms, among others, is trying to find a difficult middle ground, for example through a “duty of care” that shifts responsibility from a reactive perspective (you have to act upon notification) to a proactive one (you have to act even absent any notification). This is a very complicated debate, with serious consequences for free speech. In the book, you argue that “governments should be reinforcing intermediary immunity, not chipping away at it with content regulations”. Do you think the UK White Paper is actually a good starting point for a comprehensive, international approach to regulating all “online harms”? Should this be the way forward?
I still haven’t done a full analysis of the White Paper, but my main concern is that even if you establish a “duty of care” — or when you establish it — you have to be clear about what’s legal and what’s illegal. You can’t have a duty of care where you say “this content is harmful, but lawful”: that doesn’t provide much guidance to the companies in terms of what they should be doing with that kind of content. If we want to have democratic control of these decisions, you still ultimately have to have democratic governments making decisions about what is lawful and what is not. The UK has a hard call to make. If they were to say that there is a “duty of care” for the platforms to address illegal content, I don’t think anybody would be concerned. As long as you clarify that it’s illegal, and you have your public institutions making the final call on legality, that’s totally fair. But if you impose an obligation on companies to address content that is not illegal but sits in this kind of liminal space of harmfulness, then you’re asking the companies to really scrub the content on their platforms — even legitimate content. And that’s where the risk is. The real test for something like the White Paper, and for the British government moving forward, is: how do they address that problem? How do they make those hard calls? How do they make sure that those hard calls are made by public institutions and not by companies acting alone?