
Parler Executive Responds To Amazon Cutoff And Defends Approach To Moderation

caption: Amazon cut off Parler from its Web hosting service, knocking the social media site offline. (Getty Images)

Parler calls itself a "conservative microblogging alternative" to Twitter and "the world's premier free speech platform."

But it has been offline for five days, and possibly for good, after Amazon kicked Parler off its Web hosting service.

Founded in 2018, Parler is a favorite of right-wing extremists and supporters of President Trump. It has few restrictions on what users can post, attracting people who say they are being censored by Twitter and Facebook. And it says it collects less data on users than other social media companies.

Last weekend, Amazon pulled the plug, saying it found messages on Parler "that clearly encourage and incite violence" and that Parler's plan to use volunteers to remove this content wouldn't be sufficient.

Parler filed a lawsuit against Amazon, saying Amazon "blindsided" the company by abruptly cutting off service and that Amazon "never expressed any concerns" with Parler's moderation system before last weekend. Amazon responded by saying Parler "demonstrated unwillingness and inability" to take down "content that threatens the public safety, such as by inciting and planning the rape, torture, and assassination of named public officials and private citizens."

Parler's chief policy officer, Amy Peikoff, says the site's goal is freedom of speech.

"We are trying to allow for maximum freedom of expression consistent with the law," she tells NPR's Steve Inskeep.

In an interview on Morning Edition, Peikoff defended Parler's approach, saying the best counter to misinformation "is more information." Here are excerpts of the interview:

What responsibility, if any, does Parler take for the content on your site?

Our community guidelines were clear that we would not knowingly tolerate criminal activity on the site. We were trying to avoid using a system in which we would scan every piece of content with an automated algorithm. And so what we had was a community jury system in which any person on Parler could, of course, report a piece of content. We had a reporting mechanism. The report goes into a jury portal and we had a bunch of volunteer jurors who were adjudicating these cases. And then the verdicts would come down, the content would get removed as appropriate.

We should be frank that a lot of people migrated to Parler because they felt they could not lie as freely as they wanted to on the other social media platforms.

I wouldn't put it that way, and not because they said they want to lie. Maybe there are some people, of course; we've had some bad actors come over who would just tell all kinds of lies and everything else. I think every platform has that. But some people, not all of them, came over because they thought they were being treated disproportionately unfairly on other sites.

Do you take as a company any responsibility for not just calls for violence, but just obvious inciting lies about a stolen election?

No, you know, I don't think that lies in and of themselves are inciting, though within a certain context you could say that certain lies are. We could talk about, for example, the speech President Trump gave while the events at the Capitol were still going on. He ended the speech with "go home in peace," but a lot of us found it not very convincing, given all of the preamble that came before.

You could say, OK, in that context, he's telling certain lies, and that could be seen as a further incitement given the ongoing activities on the ground that day. So I see what you're getting at. But as for lies by themselves, can you say that lies in and of themselves are inciting in the real world? No. When you're dealing with misinformation, we think the best [antidote] is more information.

In Amazon's response to your company's lawsuit, they quote a number of posts on Parler. ... There are specific calls to violence, calls for a civil war starting on Inauguration Day, urging people to form militias, urging people to "shoot the police," urging people to hang specific public officials. This is just a partial list. When you read that list and know that it appeared on your company's platform, what do you think?

I mean, I don't want it there, obviously. But again, the question is, what mechanism do you use to detect and then remove that content? As time went on through November and into December, we realized that the model we had wasn't enough and that we needed to do more. We were making those changes, and we were in discussions with Amazon.

They dropped this on us on Friday afternoon, while we were telling them what we were doing and that we were willing to do more. Over the weekend we were even starting to program a bit of A.I., to figure out how we could use A.I. consistent with our mission, and we had started tagging some content that way. So we're definitely amenable to this. Nobody wants this on their platform.

There is plenty of this content, or at least there was, on Facebook and Twitter as well, on all of them. I've heard from former policy people at other platforms that the challenges are everywhere. Even when you do use A.I., it's not going to be 100% perfect.

No, we don't like to see it; it expressly violates our guidelines. The challenge is how to remove it effectively and how to make sure our platform is designed not to encourage the sorts of sentiments that would lead to that type of content being posted in the first place.

Editor's note: Amazon is among NPR's financial supporters.

Jan Johnson and Danny Hajek produced and edited the audio interview. [Copyright 2021 NPR]
