An Attempt to Assess Section 230

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This text, part of Section 230 of Title 47 of the US Code, is often called “the 26 words that created the Internet”. It’s this specific section that the US Supreme Court is being asked to examine in multiple cases. Two questions arise from that. First, what will SCOTUS decide? Second, what should it decide? We can’t address the first, so we’ll have to focus on the second.

The Google case that’s already been argued, Gonzalez v. Google, is a narrow test of Section 230. The assertion here isn’t that Google is responsible for YouTube content, but that it’s responsible if it decides, by any means, to promote specific content that turns out to be outside traditional constitutional protections. That raises what I think is the key point in all of this, which is that this shouldn’t be a question of “everything is protected” or “nothing is protected” under Section 230.

CNN’s view attempts balance, and at least lays out the issues. It also identifies a basic truth that reveals a basic untruth about the opening quote: these 26 words didn’t create the Internet; they created social media. Finally, it indirectly frames the question of whether social media is simply an extension of a real-world community or something different. That leads us into the slippery world of the First Amendment.

Freedom of speech, which is what the First Amendment covers, doesn’t mean that anyone can say anything they want. The well-known limitation regarding falsely yelling fire in a crowded theater is proof that the freedom doesn’t extend to areas where public safety is involved. Most also know that if you say or write something that is both untrue and harmful, it’s a form of defamation, and you might be sued for it. In other words, your freedom of speech doesn’t extend to falsehoods that damage someone else’s reputation or livelihood. Many jurisdictions also impose legal limits on speech deemed “hate speech”, though in the US such speech is largely constitutionally protected. Free speech has limits, and those limits can be enforced.

Except, maybe, online, and that’s where the issue of whether social media is an extension of the real world comes in.

If Person A says something that’s criminally or civilly actionable, but yells it out in a vast wilderness, it’s unlikely they’d be held accountable even if someone overheard it. Similarly, someone saying the same thing in a small gathering likely wouldn’t be prosecuted unless they were inviting others to join a criminal conspiracy or the “gathering” was one open to a wide range of people and ideas. Suppose you uttered a defamatory statement to a reporter? Suppose you characterized an ethnicity or gender in a negative way in a group of people you didn’t know? It seems like many of the exceptions to free speech relate to the social context, and that’s why it’s important to decide what social media is.

You can create a social-media audience in a lot of ways, from a closed group where people are invite-only and the topic is specifically identified ahead of time, to a completely open audience like that theater someone could be charged for falsely yelling “Fire!” in. It’s not clear that everyone who uses social media understands the scope and context into which their comments are introduced. That alone makes it difficult to say whether a given utterance should be considered “free speech.”

Then there’s anonymity. Do you know who is posting something, or do you just know who they say they are? Some platforms will allow you to use a “screen name” that doesn’t even purport to identify you, and I don’t think any popular platform actually requires solid proof of identity. Redress against the person who uttered something isn’t possible if you don’t know who they are.

Finally, there’s “propagation velocity”. Generally, people are more likely to get a serious penalty for libel than for slander, because the former means the offending remark was published and the latter that it was spoken. Spoken stuff is gone quickly; published stuff endures as long as a copy exists. If there’s harm, it endures too.

Opponents of Section 230 believe that immunizing social-media companies from actions regarding what they publish, but don’t create themselves, has made the platforms a safe harbor for abuse of free speech. Supporters of the section believe that a social-media forum is simply a virtual form of the crowd on the street corner, which orators have addressed from soapboxes since the dawn of our Constitution.

What’s right here? Let’s start by looking at what, IMHO, is clearly wrong. It would be wrong to say that a social media platform is responsible for everything that every person on it says. To me, that clearly steps across the boundary between Internet forums and the real world and applies a different set of rules to the former.

I also think it’s wrong to say that social media is responsible for policing the sharing of posts within a closed community that people join if they accept the community value set. To me, that steps across the line between such a community and a party where people discuss things among themselves. The same rules should apply to both.

What is right, then? I think that if somebody wants to share a post outside those closed communities, the post has to be subject to special moderation. You can’t yell “Fire!” in a crowded theater, nor should you be able to in a crowded Facebook. Meta should require that any broadly shared post be subject to explicit screening.

It’s also right to require the same thing of posts that earn a social-media recommendation. If a social-media player features a post, they’re lending some of their own credibility to it, and they have to take ownership of that decision and accept the consequences of it. This is where the Google case comes into play, IMHO. Prioritizing results via an algorithm is an active decision that promotes the visibility of content, and I think that decision has consequences.

I also think it’s right to place special screening requirements on any posts from sources that have not been authenticated as representing who they claim to be. That identity should be available to law enforcement or, if required, in discovery in a civil defamation lawsuit. Social media may not be responsible if a user defames someone, but platforms should not offer users a level of anonymity that’s not available in the real world.
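
Taken together, these three proposals boil down to a simple decision rule: a post needs explicit screening if it leaves its closed community, if the platform itself promotes it, or if its author hasn’t been authenticated. Here’s a minimal sketch of that rule in Python; every name and field is a hypothetical illustration of the logic, not any real platform’s data model or API.

```python
# Hypothetical sketch of the screening rule proposed above.
# The Post type and all its fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Post:
    author_verified: bool       # has the author's real identity been authenticated?
    shared_beyond_group: bool   # is the post being shared outside its closed community?
    platform_promoted: bool     # is the platform featuring or recommending the post?

def needs_explicit_screening(post: Post) -> bool:
    """A post triggers explicit screening if any of the three rules applies."""
    return (post.shared_beyond_group        # rule 1: it leaves the closed community
            or post.platform_promoted       # rule 2: the platform boosts it
            or not post.author_verified)    # rule 3: the author is anonymous

# Example: an unauthenticated post is still subject to screening even if it
# never leaves its group and is never promoted.
print(needs_explicit_screening(Post(author_verified=False,
                                    shared_beyond_group=False,
                                    platform_promoted=False)))  # True
```

The point of the sketch isn’t the code but the shape of the rule: each trigger corresponds to a place where, in the real world, speech would leave its original social context.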

Is there any chance the Supreme Court is going to do something like this? Many of the justices are of my own generation, so it’s unfair (I think) to assume they’re all Luddites. However, there’s no question that my own views are colored by my own technical bias and social experience, and there’s no question that in the end what’s going to matter here is what the law says, which I can’t judge as well as they can. Might the law not be up to date in an Internet world? Sure, but many people and organizations probably think the law should be updated to represent their own views better. There’s no law at all if everyone gets to write their own, and if the law is at fault here, we need to change it formally, not claim it doesn’t apply.