Big Tech poses an enormous challenge to free speech — but we aren’t having the right debate about it.
Editor’s note, May 5, 2021: On Wednesday, a Facebook oversight board ruled that the social media service could retain its ban on former President Donald Trump following the insurrection at the US Capitol on January 6. The board also stated, however, that Facebook would need to either justify a permanent ban or eventually restore Trump’s account. The following conversation, which took place on April 20, addresses some of the deeper issues raised by Facebook’s ban.
America’s commitment to free speech is uniquely radical.
The US Constitution treats freedom of expression as the master freedom that makes every other possible. And our legal system reflects this view, which is why it has always been incredibly difficult to suppress or punish speech in this country.
But there has never been a consensus on how to implement the First Amendment. Free speech law has evolved a ton over the years, especially in the aftermath of revolutions in media technology. The birth of radio and television, for example, altered the information landscape, creating new platforms for speech and new regulatory hurdles.
Today, the big challenge is the internet and the many ways it has transformed the public square. In fact, if a public square exists at all anymore, it’s virtual. And that’s problematic because our communication platforms are controlled by a handful of tech companies — Twitter, Facebook, Google, and Amazon.
So what happens when companies like Facebook and Twitter decide, as they did in the aftermath of the insurrection on January 6, to ban the president of the United States for “glorifying violence” and spreading dangerous misinformation about the election? Is that a violation of the First Amendment?
The conventional response is no: Facebook and Twitter are private companies, free to do whatever they want with their platforms. That’s not wrong, but it is oversimplified. If the public square is controlled by a few private companies and they have the power to collectively ban citizens whenever they want, then doesn’t that give them the ability to effectively deny constitutionally protected liberties?
There are no simple answers to these questions, so I reached out to Genevieve Lakier, a law professor at the University of Chicago and an expert on the history of the First Amendment, to explore some of the tensions. Lakier believes our current debate about deplatforming — and free speech more generally — is too hollow.
We talk about why contemporary First Amendment law is poorly equipped to handle threats to speech in the internet era, why we don’t want tech CEOs arbitrarily policing speech, what it means to have private control of the mass public sphere, and what, if anything, we can do on the policy front to deal with all of these challenges.
A lightly edited transcript of our conversation follows.
Sean Illing
What does the law actually say about the right of private companies like Twitter or Facebook to censor or ban users at will? Is it legal?
Genevieve Lakier
It is definitely legal. The First Amendment imposes very strict non-discrimination duties on government actors. So the government isn’t allowed to ban speech just because it wants to ban speech. There’s only going to be a limited set of cases in which it’s allowed to do that.
But the First Amendment only limits government actors, and no matter how powerful they are, under the current rules Facebook, Amazon, and Twitter are not going to be considered government actors. So constitutionally they have total freedom to do whatever they want with the speech on their platforms.
The only caveat here is that they can’t permit unlawful speech on their platforms, like child pornography or speech that violates copyright protections or speech that’s intended to communicate a serious threat or incite violence. But in those cases, it’s not the tech companies making the decision; it’s the courts.
Sean Illing
So why do you believe that our current legal framework is inadequate for dealing with free speech and tech platforms?
Genevieve Lakier
It’s inadequate because it rests on a false understanding of the speech marketplace. The best explanation for why we have a strict state action restriction on the scope of the First Amendment is that the government is a regulator of the speech marketplace, so we want to limit its ability to kick anyone out of the marketplace of ideas.
Ideally, we want to give people who participate in the marketplace of ideas a lot of freedom to discriminate when it comes to speech because that’s how the marketplace of ideas separates good ideas from bad ideas. You couldn’t have an effective marketplace of ideas if people couldn’t decide which ideas they want to associate with and which ideas they don’t.
And that makes sense at a certain level of abstraction. If the government were the only governor of the marketplace of ideas, the current rules would work fantastically. But that is not the world we live in, and the whole public-private distinction doesn’t really map onto the world of today. As the platforms make clear, private actors very often are themselves governors of the marketplace of ideas. They’re dictating who can speak and how they may speak.
Facebook and Twitter are not government actors; they don’t have an army, and you can leave them much more easily than you can leave the United States. But when it comes to the regulation of speech, all the concerns that we have about government censorship — that it’s going to limit diversity of expression, that it’s going to manipulate public opinion, that it’s going to target dissident or heterodox voices — also apply to these massive private actors. Yet under the current First Amendment rules, there is no mechanism to protect against those harms.
Sean Illing
I absolutely don’t want Mark Zuckerberg or Jack Dorsey or John Roberts deciding what kind of speech is permissible, but the reality is that these tech platforms are guided by perverse incentives, that they do promote harmful speech and dangerous misinformation, and that this has real-world consequences.
But if we want a truly open and free society, are those just risks we have to live with?
Genevieve Lakier
To some degree, yes. People love to talk about free speech as an unadulterated good, but the truth is that the commitment to free speech has always meant a commitment to allowing harmful speech to circulate. Free speech means little if it only means protection for speech that we don’t think is objectionable or harmful. So yeah, a society organized on the principle of free speech is going to have to tolerate harmful speech.
But that doesn’t mean that we have to tolerate all harmful speech, or that we can’t do anything to protect ourselves against harassment or threats or violent speech. Right now we have what’s widely seen as a crisis of speech moderation on these platforms. The platforms themselves are responding through attempts at self-regulation. But those efforts are always going to be guided by the profit motive, so I’m skeptical about how far that’s going to get us when it comes to sustainable speech moderation policies.
Sean Illing
Do you want the government telling Zuckerberg or Dorsey how to moderate content?
Genevieve Lakier
We might, as democratic citizens, think that our democratic government should have something to say about the speech that flows through the platforms. That doesn’t necessarily mean that we want Congress telling Jack Dorsey or Mark Zuckerberg what speech they may or may not allow. There’s a tremendous amount of disagreement about what’s harmful speech, or where to draw the lines, and you might not think Congress is in a good position to make those kinds of decisions.
Perhaps we want a diversity of approaches to content moderation across the platforms, and the government establishing a uniform speech code would undermine that. But at the same time, the platforms are governors of speech; they’re the regulators of incredibly important forums of mass communication. And so I, as a democratic citizen who thinks the free speech principle is intended to facilitate democratic ends, want there to be more democratic oversight of what happens on the speech platforms.
Sean Illing
That sounds perfectly reasonable in the abstract, but what would “democratic oversight” look like in practice?
Genevieve Lakier
One way is to mandate transparency. To require the platforms to give more information to the public, to researchers, to the government, about how they’re making content moderation decisions, so ordinary citizens can assess if it’s good or bad, or what the effects of the policies are. That’s tricky because you’d have to think about what kind of information the platforms should be required to give and whether or not it would offer us any real insight. But I do think there’s a role for transparency here.
Alternatively, if we recognize that these private actors are playing such a tremendously important role in our public life, we could think about ways to make their decision-making more democratic or more democratically legitimate. So there have been proposals to create a kind of regulatory agency that would potentially collaborate with some of the platforms on developing policies. That might create more democratic structures of governance inside these platforms.
Sean Illing
What do you make of Justice Clarence Thomas’s recent suggestion that we should consider treating tech platforms like “common carriers” and regulate them like public utilities? Is that a good idea?
Genevieve Lakier
This is an idea that people on both the left and the right have suggested in recent years, but one that has always been viewed as constitutionally problematic. So it’s interesting that Justice Thomas thinks a common carrier platform law would be constitutional.
Practically, it’s hard to see how a common carrier regime would work. Common carrier laws — which prevent private actors from excluding almost any speech — work well when applied to companies whose primary job is moving speech from one place to another. But the social media companies do a lot more than that: one of the primary benefits they provide to their users is content moderation, which facilitates conversation, flags news or videos as relevant, and so on.
Common carrier obligations would make it difficult for the companies to perform this service, so the common carrier analogy doesn’t really work. Justice Thomas also suggested the possibility of subjecting the platforms to public accommodations law. Now, that seems more viable, because public accommodations law doesn’t prevent private companies from denying service to customers altogether; it merely limits the bases on which they can do so.
Sean Illing
Going back to your point about transparency, even if a company like Twitter formulated what most people might consider transparent and responsible speech policies (which I doubt, but let’s just grant that possibility), I don’t see any way to enforce them consistently over time. There is just too much ambiguity, and the boundaries between free and harmful speech are impossible to define, much less police.
Genevieve Lakier
Regulation of speech is always tricky, and the scale of the speech and the transnational scope of these platforms create enormous challenges. The best we can do is try to develop mechanisms: appeals processes, reviews, and transparency obligations under which the platforms disclose what they’re doing and how they’re doing it. It won’t be perfect, but it would be good to get to a system where we have some reason to believe that the decision-making is not ad hoc and totally discretionary.
Sean Illing
Are there free speech models around the world that the US could follow or replicate? A country like Germany, for example, isn’t comfortable with private companies deplatforming citizens, so it passed a law in 2017 restricting online incitement and hate speech.
Is there any room for an approach like that in the US?
Genevieve Lakier
The First Amendment makes it extremely difficult for the government to require platforms to take down speech that doesn’t fall into some very narrow categories. Again, incitement is one of those categories, but the case law defines it very narrowly to mean only speech that is intended, and likely, to lead to violence or lawbreaking. Hate speech is not one of those categories. That means Congress could make it a crime to engage in incitement on the platforms, but that would apply only to a very limited range of speech.
Sean Illing
I know you believe the platforms were justified in banning Trump after the assault on the Capitol in January, but do you also believe that we should punish or censor public officials for lying or perpetrating frauds on the public?
Genevieve Lakier
I think politicians should be able to be punished for lies, but I also think it’s very dangerous, because the distinction between truth and lies is often difficult to draw or subjective, and obviously democratic politics involves a lot of exaggeration and hyperbole and things that skirt the line between truth and lying. So we wouldn’t want a rule that allows whoever is in power to silence their enemies or critics.
But on the other hand, we already prosecute all kinds of lies. We prosecute fraud, for instance: when someone lies to you to get a material benefit, they can go to jail, and the fact that they used speech to effectuate that fraudulent end is not a defense. As a subspecies of this, we criminalize election fraud. So if someone lies to you about the location of a polling place, or gives you intentionally incorrect information about how to vote, they can go to prison.
Political lies that constitute fraud, or that contribute to confusion about an election, are in a narrow category of their own. So, for example, I think President Trump’s lies about the outcome of the election are a species of election fraud. When lies like that are used to achieve a material or electoral benefit, as when he used them to justify staying in power, that feels like the kind of lie that perhaps we want to include in our election fraud category.
Sean Illing
I just can’t imagine political speech, which is very different from commercial speech, ever being controlled that way. A border case like Trump inciting violence might be as clear-cut as it gets, but what about propaganda? Sophistry? And the innumerable forms of bullshit that have always constituted democratic politics? Democracy is a contest of persuasion and politicians and parties are always going to deceive and manipulate in pursuit of power and money.
That’s just baked into the democratic cake, right?
Genevieve Lakier
So I agree that there’s a category we could call election fraud that maybe we feel okay prosecuting, and then there’s ordinary political bullshit that maybe we don’t. But I’m going to throw a question back at you, because I think there are cases on the border that are really difficult. For example, what about the lies that Trump told his supporters in order to keep them contributing to his fund after the election?
To me, that looks like fraud. If it were anyone but a politician, we would just call it classic fraud. But in the political domain, we call it something else. I’m not entirely sure how to think about this, but it’s an interesting case.
Sean Illing
Oh, no doubt it’s fraudulent, but I guess my point is that a great deal of politics is fraudulent in the same way, though it’s usually less overt than Trump’s hucksterism. Parties and politicians and special interest groups lie and peddle half-truths all the time. There’s so much bullshit in our political system that Trump appealed to a lot of people precisely because he was so transparently full of shit, which says quite a bit about where we’re at. The idea that we could ever meaningfully punish lying strikes me as fantastical.
Genevieve Lakier
What’s so interesting is that when you look at commercial speech cases, it’s not even controversial to prosecute false advertising. There’s no debate that false advertising is outside the scope of First Amendment protection.
The justification for that is often that the person who’s selling you the commercial good has information about the good that the consumer doesn’t have and cannot get, so if they tell you it will cure bad breath or whatever, you have to trust them. When there’s a clear imbalance in knowledge and access between the speaker and the listener, the court says it’s okay to prosecute lying.
One approach I’ve thought about, though I’m not sure it would work, is to focus on cases where a politician is lying about something that members of the public have no way of checking or verifying, either on their own or through public sources.
One of the reasons the lies about the election were so damaging is that the people who were listening to those lies didn’t have any way of knowing whether this was or was not happening. I suppose they did, in a sense; they could rely on other news sources. But it was very difficult for them to verify what was happening inside the black box of the election machinery.
So yeah, I agree that lying is an intrinsic part of democratic politics, but I also think that there are certain kinds of lies that are very difficult to respond to just through the ordinary marketplace of ideas. A huge challenge moving forward will be navigating these kinds of questions in a rapidly changing landscape.