Following the mob violence at the U.S. Capitol last week, Facebook suspended President Donald Trump from its platform, and Twitter followed suit shortly thereafter.

Over the weekend, Google and Apple removed Parler, a social media platform widely used by conservatives, from their app stores. Then, Amazon suspended Parler from its web services Sunday evening. Now, many Americans are voicing their concerns over the power such platforms have to limit free speech. 

Klon Kitchen, director of the Center for Technology Policy at The Heritage Foundation, joins the show to explain why Twitter and Facebook say they banned the president, and why Google, Apple, and Amazon are actively suppressing Parler.

Kitchen also explains what laws and reforms are needed to keep the power of technology giants in check. 

We also cover these stories: 

  • Democrats file an impeachment article against Trump, citing the Jan. 6 unrest at the Capitol.
  • Sen. Sheldon Whitehouse, D-R.I., asks that Sens. Josh Hawley, R-Mo., and Ted Cruz, R-Texas, be removed from the Senate following their attempts to object to the certification of the 2020 presidential election.
  • The social media platform Parler sues Amazon after being suspended from Amazon Web Services. 

“The Daily Signal Podcast” is available on Ricochet, Apple Podcasts, Pippa, Google Play, and Stitcher. All of our podcasts can be found at DailySignal.com/podcasts. If you like what you hear, please leave a review. You can also write to us at letters@dailysignal.com.

Virginia Allen: I am joined by Klon Kitchen, the director of the Center for Technology Policy at The Heritage Foundation. Klon, welcome back to the show.

Klon Kitchen: My pleasure. Thanks for having me.

Allen: So, big tech companies and free speech are definitely on everyone’s mind this week after rioters forcefully entered the Capitol and caused a full lockdown of the building last Wednesday.

Twitter first suspended President Donald Trump’s Twitter account for just 12 hours, but shortly after they reinstated his account, Twitter banned President Trump from the platform permanently.

Could you just walk us through the reasoning for why Twitter said that they decided to permanently ban the president from their site?

Kitchen: Sure. And as I do, a bit of context here: Conservatives have been concerned about the way the tech industry has been handling itself, and conservatives’ reach online, for a long time.

And these companies have utterly failed to win and to hold public trust. That reality is making an already difficult situation even worse when actions like this one are taken.

I’ll walk through a little bit of the rationale behind it, but frankly, the intentions of these companies just aren’t broadly trusted. And so regardless of the relative merits of the individual case, a lot of people are just unhappy, and we’re all trying to navigate that. So, that’s just context, and I want to recognize that reality.

In terms of what happened with Twitter’s permanent ban of President Trump, as you said, following the events last week at the U.S. Capitol, he was put on a 12-hour suspension. At the point where he came back up at the end of that suspension, he posted two tweets.

The first tweet said this: “The 75,000,000 great American Patriots who voted for me, America first, and make America great again, will have a giant voice long into the future. They will not be disrespected or treated unfairly in any way, shape, or form.”

He then followed it up with a second tweet that said this: “To all of those who have asked, I will not be going to the Inauguration on January 20th.”

Now, Twitter initially determined that these tweets did not in and of themselves violate its community standards, and so they were left up. And one could read through them and say, “Well, you may agree or disagree with whatever it is the president said, but they don’t seem particularly provocative.”

But Twitter also decided that it was going to watch the online conversation around these tweets.

As they did that, they began to observe a growing number of users who were citing or interpreting [the] president’s tweets, those two particularly, as calling for or justifying additional violent political rallies or actions. They also observed that specific and deliberate planning, including dates and locations, was starting to evolve and be discussed and organized on Twitter and on other social media platforms.

In fact, if you take a look at Twitter’s announcement of the ban, they obliquely reference that rationale in the very first paragraph, where they talk about how these tweets were being interpreted and used.

So, this led Twitter to ultimately close the president’s account and to begin working with federal law enforcement and to leverage existing counterterrorism partnerships with other tech companies to begin sharing threat information as this challenge kind of grew.

Now, one final point. While what I just described is the immediate context for this decision, I think it’s also true that this decision is the culmination of four years of sparring between the president and Twitter. The most recent episode may have just been the straw that broke the camel’s back.

But what we’re seeing now, even in news reporting that’s coming out today, is that these threat streams that seem to have motivated Twitter are coming into public view, and the FBI is now actively warning against some of this.

Allen: Klon, I mean, of course there are instances where individuals should be removed, should be blocked. We obviously don’t want to see people calling for violence, using these platforms to provoke violence.

But following that logic: Twitter looked at those two initial tweets, saw that the tweets themselves didn’t violate its standards, and then removed the president because of conversations happening off of those initial tweets. It seems to me that it would make more sense to focus on removing the individuals who are spurring on the violence instead of removing the leader of the free world from Twitter.

Kitchen: Yeah, I think that’s a fair critique. One, they absolutely did pivot and started focusing on the people who were actually doing the planning. That was clearly happening.

Two, one of the things that we have to recognize is that it’s broadly believed that things the president has said over time, and specifically most recently, actively instigated some of the activities that we observed last week.

I mean, one of the president’s most ardent supporters, Sen. Lindsey Graham, on Wednesday night came to the Senate floor and said exactly that. As did [Rep.] Liz Cheney, as did [Sen.] Tom Cotton, as did a whole host of other people.

So, when Twitter gives the justification that, “Hey, we’re concerned that leaving these tweets up … ”—meaning the president’s tweet—“ … could inspire or lead to additional violent action,” whether we agree with them or not, they certainly weren’t the only ones who were making similar judgments.

And it wasn’t only left or liberal commentators making that judgment, it was also, frankly, those on the political right, even in the U.S. Senate.

Allen: We have seen a lot of conservative users saying that many of their followers have all of a sudden disappeared. Do we know the situation there? What’s happening with what seem to be mass numbers of people being taken off the site? And who is making the decisions about who gets to stay and who’s taken down?

Kitchen: Yup. So, almost simultaneously with the action against the president, we began observing the loss of thousands of “conservative”—I’m putting that in air quotes because it’s just impossible to know—followers on Twitter, particularly followers of conservative influencers.

And what we have discovered is that as Twitter and other social media organizations began investigating this threat stream about online anti-government planning, they realized that a key part of what was fomenting it, and also spreading it, was these users and networks associated with the QAnon conspiracy.

And so it was decided to increase the scrutiny on those QAnon accounts in an effort to, one, mitigate their ability to coordinate and plan, and, two, mitigate the spread of their content.

What that mitigation effort largely consisted of was this: Any account that showed activity that looked like it was either spam or bots—these are automated fake accounts; when I say bots, that’s what I mean—was immediately moved into a kind of purgatory status where the owner of the account had to verify that they were in fact a real person, typically with just a phone number. If they could verify that, they would be reinstated.

Well, when those accounts were put into that purgatory state, they dropped off the follower count and they went quiet.

So it turns out that on the political right side of Twitter and social media, a fair number of our conservative influencers are also followed by these large QAnon networks. And so when those spammer networks and bot networks were taken down, you saw a corresponding decrease in the number of followers. And the people making those decisions are the social media companies themselves.

Allen: OK. So at the end of the day, in some ways, it’s not a bad thing to have these sorts of bot networks being pulled down.

Kitchen: I mean, look, the influencers will make an argument that it was bad for them because it cuts down on the kind of propagation of their … So, there are plenty of people who are saying, “Look, I’m being suppressed here. You’re preventing the spread of my [message]”—meaning the conservative commentator’s message—“by taking down, you know, my followers, by, you know, thousands at a time.”

And technically speaking, that’s not wrong. That is one of the consequences. But we’ve long been encouraging these social media companies to take down these types of networks. And particularly in a context where these networks are being leveraged to plan anti-government violence, it seems like a rational, or at least defensible, choice.

Allen: Of course. Let’s talk for a minute about Facebook. Facebook really didn’t hesitate. Right away after the events at the Capitol on Wednesday, following, of course, first President Trump’s rally and then the full lockdown of the Capitol, Facebook announced that it was banning [the] president through at least the end of his term. What do you think about Facebook’s decision?

Kitchen: I think any content moderation decision, including who gets to stay on a platform and who doesn’t, is almost always going to be debatable. There’s no ironclad logic that will satisfy everybody.

I, of course, think that Facebook is a private company and has a choice in whom it will allow to use its platform in the same way that The Heritage Foundation has a choice in who it will allow to use its website to post articles.

I also think episodes like this hurt the conservative cause. Even if you believe that big tech has waited with bated breath to constrain conservative speech, if you have drawn that conclusion, well, then one of the best things that conservatives can do to combat that is to not give them golden justifications for taking that kind of action.

And the reality is that over the last several weeks, and even longer, a number of golden opportunities have been presented. And so it makes it really difficult to discern the motives behind any one thing. But this is the sticky situation we’re in.

It goes back to my point at the beginning. The fact that the public does not trust these companies is decisively bad for the country in these types of moments.

Allen: Klon, I think one of the reasons why so much of the public doesn’t really trust these companies is that we’re often seeing, it feels like, these standards applied unequally. Groups on the right seem to be more frequently targeted than those on the left.

Is that a fair assessment? I mean, have we seen Twitter and Facebook apply any of these standards to leaders on the left on their platforms? Have any prominent liberal lawmakers been fact-checked or banned?

Kitchen: So, No. 1, what you’re describing is exactly the issue. And look, in one sense, these companies are absolutely hypocrites, and we’ve certainly said that to their faces.

I mean, at the point where you have the ayatollah of Iran able to call the state of Israel a cancer on Twitter and that gets left up, but then other actions are taken, I think they’re completely open to legitimate claims of hypocrisy.

And I have been at the forefront of engaging them on those issues, as Heritage has been more broadly, and I think that’s legitimate.

At the same time, it is also true that actions are taken against Democratic and left-leaning users online, and that’s not always known.

A recent example is that a host of liberal and left-leaning groups were labeled or fact-checked, or even taken down, on the night of the Georgia Senate election because they prematurely declared victory.

Now, that happened, and it happened at a fairly significant scale. But frankly, the left just isn’t as organized as the right when it comes to this. They’re so fractured along different identities that they often aren’t able to make the noise that our side is able to make when action is taken against them online.

Allen: So then what actions should be taken to make sure … the standards are applied equally and evenly? Are there certain laws that need to be passed?

What needs to happen in order for us to be able to move forward in a way that the American people can begin to see that these companies are taking some responsibility for their actions and are actually applying their standards evenly?

Kitchen: That’s the big question. One, there is no silver bullet. Two, there are some very practical things that can be done.

So, in terms of beginning to directly address the lack of confidence in these companies, I think these companies have to demonstrate some accountability.

And I think one of the best ways to begin that—this is not going to be decisive, it’s not sufficient, but it is required—is reform of what’s called Section 230 [of the Communications Decency Act]. These are the intermediary liability protections that these companies enjoy: Broadly speaking, the law shields platforms from liability for what their users post.

We’ve written a paper about it. It’s on our website, and I’m sure you can link to it. It’s “Section 230: Mend It, Don’t End It,” in which we lay out a number of very specific changes to that law that we think would bring it into compliance with its original intent. That would begin to provide the type of accountability we’re talking about here.

Allen: How do you think the rest of the world is viewing this situation? Because as you mentioned earlier, we’ve seen very dictatorial, radical, violent leaders in countries like Iran and China who are allowed to remain on platforms like Twitter, and yet President Trump has been pulled off.

So, what are we kind of hearing from the international community about how this is being viewed?

Kitchen: Well, China specifically is watching this, and they are explaining to people how this is yet another piece of evidence that our system of government is unsustainable and that the alternative they’re offering is a better way.

And what they say is … the Chinese government says it can promise the wealth of capitalism coupled with the stability of authoritarianism. And they identify technology as the key mechanism for realizing both of those promises.

And so they look at our democracy, and without a doubt, our shared experiment in ordered liberty right now feels very disordered. It feels very messy. And in one sense, it is. In another sense, I would encourage conservatives to understand that our nation has faced challenges like this before, and that we have underlying institutional stability that allows us to see our way through it.

We certainly need to exercise prudence and caution and charity, frankly, toward one another, but we can get through this.

I don’t buy into apocalyptic notions about where the nation is right now. But that would certainly be something that the Chinese and the Russians and the Iranians and the North Koreans and all kinds of other foreign bad guys out there would have us believe. And I think that is neither safe nor justified.

Allen: It certainly flies in the face of who we are as Americans, because one of our foundations is free speech. And even with private companies, where the First Amendment doesn’t directly apply, there’s no doubt that free speech is something Americans hold sacred in the social media space and still believe should be guarded and protected.

So, how do we go about promoting that free speech online?

Kitchen: Look, our system of government is not made for efficiency, right? Our system of government is made for stability, and there are some internal tensions baked into the cake. One of those tensions is this: We absolutely do value freedom of speech, but we mean that not just for individuals, but also for entities like these companies.

The decisions they’re making about who will and will not be on their platforms are free speech issues on their part, again, in the same way that we would never want the government to come in and tell Heritage that we have to post certain materials on our website that Heritage doesn’t want to post, for whatever reason.

But the reality is that Section 230 governs that activity, Heritage’s online presence, as much as it does Facebook’s online presence.

And so the rules that we make for the one are going to apply to the other. And so there’s just this inherent tension that we’re going to have to navigate.

Now, there’s room for improvement. And that’s why we wrote the paper on Section 230 and we made the recommendations that we had.

But … everyone should understand that fixing Section 230, one, is going to require a scalpel, not a broadsword. And two, it is in no way, shape, or form a silver bullet, because we have these baked-in tensions within our society that are going to persist long after Section 230 is dealt with.

Allen: Klon, one of the free market solutions we’ve seen arise in recent years is alternative social media platforms. And there’s one in particular that I’d love to chat about for a few minutes, called Parler. It’s known to be very friendly to conservatives, and it doesn’t censor posts. 

Many on the right have been using Parler to share their ideas. And after the events of Wednesday and President Trump’s removal from Twitter, we saw a real flood of conservatives moving over to Parler. But Apple, Google, and Amazon have removed the Parler app or taken away services from it. 

So, let’s start with Apple and Google. Can you explain what exactly is going on there with what is, more or less, their censorship of Parler?

Kitchen: Well, OK. So, this goes back to the investigation that began on Twitter into the anti-government violence planning. As all of that was being mapped out, it was discovered that a huge center of gravity for that activity was actually on Parler. And so as the various platforms became aware of that reality, they began to take action. 

So, when Twitter learned that people were using Twitter that way, it had moderation rules that allowed it to take action and remove those accounts. 

Parler has deliberately made its brand that of the “no moderation” social media company. Well, what that means is that they actually weren’t moderating any of that violent content, or content justifying violence, on their platform. And they also did not have any type of mechanism for users to report that kind of content.

Well, not moderating violent content and not having a mechanism to report violent content actually violates the rules that Apple and Google have set for being hosted in their app stores. And the reason they have those rules is that if someone were to use Parler’s app to successfully plan and conduct a violent act, and Apple and Google were aware of that activity but allowed it to persist, they could be held liable. 

So, that is one of the reasons why … Google just kicked them off completely as soon as it discovered this. Apple gave them 24 hours to adopt new moderation controls, and Parler failed to do that. 

And so both of them, frankly out of a self-preservationist motivation, said, “OK, we’re not going to assume the liability for this. If you’re not going to take action, we’re not going to host you.”

It was similar when Amazon made its decision. Amazon has similar rules, and Amazon provides the cloud infrastructure that supports Parler. For the same rationale of not wanting to be held liable for the violent content on Parler, [Amazon] said, “Listen, you either take care of this or we will no longer host your services.” Parler failed to take care of it, and Amazon dropped them. 

And now other cloud service providers, for the exact same reasons, are not willing to take them on.

So, the bottom line here is that it’s not as though other social media companies never had the idea Parler was offering, of trying to be a free speech zone, meaning a zero-moderation zone on social media. It’s just that by becoming that, by choosing that business model, two things happen. 

One, you tend to be a pretty gross place. There’s lots of stuff that shows up. Some of the worst stuff on the internet ends up being on your platform because you’re not moderating. And not a lot of people want to go there. 

And then two, you expose yourself to these types of existential liabilities. This was always going to be something that Parler faced, and at the point where that inherent challenge intersected with the ongoing planning of anti-government violence, that became the straw that broke the camel’s back for these other companies.

Allen: So, then, taking all that into consideration, as a tech policy expert, what is your assessment of Apple, Google, and Amazon’s actions here?

Kitchen: Well, looking at just the facts and not trying to discern intentions: The fact base that’s laid out there, in terms of their concerns about liability, is legitimate, it’s discernible, it’s clear. That’s true. 

I mean, just imagine for a second if we found out that there was an app in the Google and Apple app stores that … let’s say it was a Saudi Arabian messaging app. And we found out that al-Qaeda used that app to successfully plan an attack against the United States, and that Apple and Google knew that that activity was going on on that app and didn’t take action against it. What do you think would happen to those companies?

Allen: Yeah, it wouldn’t end well.

Kitchen: No, right? I mean, certainly they would be dragged before Congress, rightly, and grilled and asked, “Why did you allow this activity to happen?” But then there would be very real legal liabilities that they would be exposed to, and rightly so. Well, this is exactly that kind of scenario, right?

Allen: Yeah.

Kitchen: That doesn’t mean that I’m letting them off the hook for their hypocrisy or for the thousands of other dumb things they have done or ways they have treated conservatives. I’m not denying any of that. But in this specific case, over the last six days, the things that are being asserted and the fact pattern that is being laid out would seemingly justify at least some of these actions.

Allen: Klon, this is obviously such a complex issue. There are so many layers and various facets to the situation. 

But for individuals who are just looking at the situation, and I think specifically for conservatives who are feeling really overwhelmed, it feels like all of these social media companies are very, very quickly and, it seems, intentionally pushing those on the right out, pushing them off. And I think people are wondering, where does this stop? 

I mean, are all conservatives essentially going to be thrown off of these social media platforms? How concerned should we be about where this leads and where it’s going to end?

Kitchen: Well, I think real concerns are justified. Even if what’s going on right now is completely legitimate, it is valid to worry that these actions could be expanded to include much broader types of conservative content that we would have a real problem with, right? 

So, I am very sympathetic to that concern. And it’s something that we at The Heritage Foundation are obviously mindful of and pushing back on. 

We’re wading into this conversation, trying to be the adult in the room, by recognizing the realities that we’ve been talking about up until this point and recognizing the very real overreach that could follow all of this.

But while we engage in that, I’m often telling myself two things. One, well, don’t give them any excuse, right? So, really be smart about how I’m operating. But then two, understand that there’s some inherent risks to the way we’ve organized our society, where these companies and individuals have rights, have freedoms. And sometimes those rights and freedoms are bumping into one another. 

Now, that doesn’t mean that the status quo is the best possible way. In fact, I feel like the status quo is probably unsustainable. And so we need to be thinking very carefully about how we allow these companies to play a role in our society, to what degree, if any, we need to impose some type of a constraint on them, but we also need to understand the full consequences of any constraints that might be placed on them. 

And again, our underlying conservative political philosophy understands that there is no perfect solution; it is always about trade-offs. But I think it’s time to start thinking more carefully about what trade-offs may be warranted in this modern context.

Allen: Klon, we really appreciate your expertise on this issue. How can our listeners follow your work and keep up with all the research and the work that you’re doing on this issue?

Kitchen: Yeah. Thank you. Well, I try to be available on podcasts like this. They can go to the Heritage website and do a simple search for my name. It’s Klon … Kitchen, just like the room. You’ll see, I think, most everything I produce there, whether it’s published somewhere else or not. 

I have a weekly newsletter called the Kitchen Sync, … where I comment and give updates on the latest tech policy and news. You can sign up for that, I think, on the Heritage website as well. And yeah, I think that’s a good way. 

You can also follow me on Twitter, if that’s your thing, @klonkitchen. And yeah, I’m happy to engage as I can.

Allen: Great. Klon, thank you so much. We really appreciate your time today.

Kitchen: My pleasure.