‘Mend It or End It’: Taking a Closer Look at Section 230

Bill Walton

If you’ve ever had a post flagged on or removed from social media, you need to know about the arcane-sounding Section 230 of the Communications Decency Act.

In this episode of “The Bill Walton Show,” Klon Kitchen, the director of The Heritage Foundation’s Center for Technology Policy, joins Walton to take an in-depth look into Section 230—called “the 26 words that created the internet”—and how Big Tech and social media companies have squandered the public’s trust by abusing its privileges.

Read a portion of the transcript, lightly edited, below or watch the full episode above.

Bill Walton: Welcome to “The Bill Walton Show.” Last week I joined a long and growing list of people and organizations who’ve had their ideas censored by one of the social media companies. YouTube decided that a show that I did with Dr. Jay Richards about COVID-19 did not meet its, quote, “community guidelines.”

Well, I can try [to] cite chapter and verse about how the show presented a well-researched and reasoned argument about the social, economic, and emotional costs of the lockdowns, and especially what are better alternatives to protect Americans, [but] today I’d rather dig into the larger issue.

What gives YouTube and the other social media companies the right to choose what should and should not be part of public debate? Why do they have the power to stand between us and our First Amendment rights?

The answer is complex, and understanding it gets right at the heart of their power.

It’s something called Section 230 of the [Communications Decency Act]. We all need to understand this thing called Section 230 and what it should or should not be doing to protect or interfere with our rights of free speech.

With me to explain Section 230 and how to fix it is my frequent guest and friend Klon Kitchen, director of the Center for Technology Policy in the National Security and Foreign Policy Institute at The Heritage Foundation. Welcome, Klon.

Klon Kitchen: Hey, it’s great to be here.

Walton: So, you just posted something on the Heritage site—which I highly recommend everybody read—entitled “Section 230: Mend It, Don’t End It.” First, let’s dig into it. What is Section 230?

Kitchen: Yeah, I’ll do this as quickly and with as little nerdiness as I can. It’s a part of the statute, as you mentioned, called the Communications Decency Act, and Section 230 is a particular portion of that act that lays out liability protections for internet companies.

And the brief background on it is that in the early days of the internet, in the mid-’90s, Congress decided that it wanted to free websites to remove some of the worst things on the internet from their sites, things like pornography and all kinds of defamatory language and that kind of thing.

And so what they wanted to do was create a protection—called Section 230—that shielded these companies from liability if they removed that content from their platforms, so that they wouldn’t be in fear of being sued for abridging people’s free speech rights.

So the original intent of Section 230 is laudable. And it’s easy to appreciate: “Let’s keep the internet from becoming the worst part of itself.” However, in the subsequent decades, multiple courts at the state and federal level have interpreted those protections very, very broadly, and have essentially equated them with free speech in and of itself.

So for a piece of law that was intended to help websites remove awful material from the internet, it has since been used to protect, just as a couple of examples, a revenge pornography website that was devoted to posting nude images without the consent of those in the pictures.

Message boards have successfully defended themselves using Section 230 when they knowingly facilitated illegal activity. Websites that have facilitated—or at least made it easier to share—child sexual exploitation material have done the same. All of these things have been litigated and, ultimately, protected by arguing that Section 230 allowed them to pursue these practices. So it’s been a real problem.

Walton: This was 1996; it was part of the Telecommunications Act. And it was Chris Cox and Ron Wyden, a Republican and a Democrat, who were concerned about, I guess it was a lawsuit, Stratton Oakmont v. Prodigy Services.

And Stratton Oakmont—which the firm in “The Wolf of Wall Street” is based on—sued Prodigy for $200 million and won because Prodigy hadn’t been, I guess, I don’t remember the basis for the claim and the award, but somehow Prodigy lost, Stratton Oakmont won, and they wanted to do something about it.

So they wrote something into the law that says, “You’ve got to filter … offensive materials online—obscene, lewd, lascivious, filthy, excessively violent, harassing materials.” Absolutely, we want to block that, and then there are these words: “or otherwise objectionable.”

Kitchen: Exactly.

Walton: And it’s those three words that people have driven a truck through.

Kitchen: Yeah, that’s right. There’s a second part about taking actions to enable or make available to information content providers, or others, the technical means to restrict access. So that second part has also played a key part in much of the current political conversation.

So, you opened at the beginning by describing how you’ve been treated on one of these platforms, and it’s that “otherwise objectionable” language that has really enabled that type of increasingly politically motivated content moderation.

And that’s what’s been a real problem here recently.