'Teach Me' Interview Series on Freedom of Speech Online

Mon, 01/25/2021

I am Elsbeth Magilton, the executive director of the Nebraska Governance and Technology Center. In addition to my administrative work for the center, I do my own research, primarily in space law – but when it comes to the amazing work our faculty is doing at the intersection of law and technology, I don’t know much beyond the fact that I wish I knew more. In this series I’m sitting down with our faculty and fellows for short interviews on their work.

In this post I asked Professor Kyle Langvardt about his work on the First Amendment and internet platforms. As you can imagine, January 2021 is a busy time to be a scholar in this subject, and the issues have never been more relevant. Curious what my non-lawyer friends wanted to know, I asked them to send me their questions for Kyle, and I’m excited to share those here.

Thanks for answering my questions, Kyle! First, let’s just get this out of the way – is a social media platform banning or suspending a user a First Amendment issue?

No.

Concise, I love it. OK, even if the user is a government official?

No.  The First Amendment doesn’t even kick in when the censor is nongovernmental.

 

So even though the First Amendment doesn’t apply to private companies, are there issues in letting them decide what speech to allow and what speech not to allow?

The platforms can do what they want.  Section 230 of the Communications Decency Act makes clear that platforms can’t be held liable on account of their decision to block material they consider objectionable—and that’s true whether the material is constitutionally protected or not.  And even if Section 230 is repealed, most lawyers who work in this area assume that platforms have their own First Amendment right to censor content.  In theory, this is similar to a newspaper that chooses which letters to the editor to print, or a cable provider who chooses which channels to carry.  So in legal terms none of this raises any immediate issue. 

The policy concerns, however, are enormous.  The first concern is that our society has begun to rely on centralized private censorship as a primary tool to check harmful speech.  Of course we’ve always had private censors of one sort or another – TV networks bleeping out bad language, newspapers deciding what’s fit to print, etc. – and mass media has always been concentrated.  But we’ve never had this degree of concentration, and at such a massive scale.  And in the United States, we’ve also never had entities that censor personal communications in the way that today’s giant platforms do.  It’s a necessary evil in the immediate term, but it also makes for a sudden and disturbing cultural shift toward a paternalistic approach to public discourse.

The more acute concern is that one of the apex platforms might someday fall into the hands of a person who wants to use it for outright evil purposes.  We haven’t seen that yet.  Facebook and Twitter’s misdeeds have more to do with sloppiness, selfishness, cluelessness.  They’re not out to undercut national security, or sabotage an election, or suppress factual information.  But why should we assume that will always be the case?  We need to plan for these eventual risks now, in the same way that drafters of a constitution would try to head off long-term risks of tyranny.

How can the government regulate what online platforms allow without violating the First Amendment?

Section 230 is so protective of platforms’ decision-making in this area that we haven’t really seen courts address the issue.  But it’s safe to say that the First Amendment will impose serious limits on the government’s power in this area no matter how the courts cut it.  Beyond that, it’s uncharted waters.  It’s not obvious how the First Amendment principles we know today should translate in such a novel context—and that’s assuming that the underlying rules themselves don’t evolve over time as they always have.

Let’s say for the sake of argument that Facebook’s content moderation practices are “speech,” just like a newspaper’s editorial policy.  This is the mainstream view.  Well, if that’s the case, then the existing law says that the government will run into strict scrutiny whenever it tries to regulate Facebook’s “expressive” choices.  It’s a tough standard, and the government usually can’t satisfy it.  But that doesn’t tell us much, because sometimes the government can satisfy it.  Let’s say, for example, that the government outlawed certain egregious forms of partisan content discrimination by massive platforms in the immediate run-up to an election.  Would the public interest in avoiding intentional electoral sabotage be viewed as compelling enough to overcome Facebook’s speech interests?  Would we view things differently if TikTok was the new Facebook?  The existing law can’t answer these kinds of questions.  And there are many more questions like these.

What I’d really prefer to see – both from a constitutional and policy perspective – is some effort to look for content-neutral ways to mitigate social risk.  If you’re on Twitter, for example, and you try to retweet an article a couple seconds after you see it, Twitter asks you if you’d like to read the article first.  This applies across the board, no matter whether it’s an article about the election or an article about The Bachelorette.  It’s neutral in the same way that a noise ordinance is neutral. 

The First Amendment looks much more kindly on these kinds of policies than on policies that discriminate, and for good reason—they create fewer opportunities for the regulator to dominate and distort the discourse.  Maybe if we tempered the viral, compulsive dynamic that characterizes the largest online spaces, there would be less need for tech giants to censor so much content on a selective basis. 

 

A lot of my friends (admittedly, via the Instagram stories question feature) asked how the First Amendment may apply to misinformation online. Could the government step in and ask platforms to fact check and/or delete blatantly false posts?

As a general matter I’d say no.  Falsehoods aren’t categorically outside the First Amendment’s protection.  But false advertising is unprotected, and defamation is unprotected, and there are various other exceptional cases, such as incitement, that tend to overlap with falsehood.  The government can usually regulate those, and maybe those categories will be widened someday in light of the changes that online speech has produced.  But that doesn’t necessarily mean the government could require a platform to identify and remove those kinds of speech.  The danger there is that the platforms would censor way too much content to avoid getting crosswise with the law.  And the requirement would probably be invalidated as a prior restraint.  As for mandatory fact-checking, the problem there is that it would compel the platforms themselves to speak, and that’s generally a big no-no.

Can the government treat different platforms differently based on their size and societal reach? Speech on Facebook goes a lot further, a lot faster, than on a platform like (the still very popular) Reddit.

I expect that is how things will eventually balance out constitutionally.  It makes basic sense to take a special interest in the speech policies of platforms whose decisions have the most impact.  And we should be especially focused on the social impacts of viral speech, and on the aspects of platform architecture – including size, potentially – that accelerate virality.

There’s also some reason to treat the largest platforms’ First Amendment claims with more skepticism.  Facebook has billions of users.  It holds itself out as a forum for open discussion of nearly all topics, without regard to quality.  It has a “Supreme Court” – I’m talking about the Oversight Board – to handle takedown appeals.  At some point the analogy between a platform and a newspaper begins to look rather overextended, and the factors that contribute to that impression tend to correlate with size. 

So even if it’s mainstream today to say Facebook has strong First Amendment protections for its content governance practices, I expect that those protections will eventually be discounted somehow to reflect a more realistic view of the platform’s role.

 

What is the one thing you wish everyone in the country understood about the First Amendment on the internet?

That the First Amendment and the freedom of speech are separate concepts.  The First Amendment is a law, and the freedom of speech is a value.  Just because someone has restricted your speech doesn’t mean the First Amendment is in play.  But the converse of that statement is also true:  making sure the government follows First Amendment law is no longer enough to secure the freedom of speech as a social value.

 

Professor Kyle Langvardt joined the faculty in July 2020 as a member of the Nebraska Governance and Technology Center.  He is a First Amendment scholar who focuses on the Internet’s implications for free expression, both as a matter of constitutional doctrine and as a practical reality. His written work addresses new and confounding policy issues including tech addiction, the collapse of traditional gatekeepers in online media, and 3D-printable weapons. Professor Langvardt’s most recent papers appear in the Georgetown Law Journal, the Fordham Law Review and the George Mason Law Review.

Tags: Interviews, Teach Me Freedom of Speech online