The One Law That’s The Cause Of Everything Good And Terrible About The Internet

And the push to change it, for good or ill.

At the beginning of June, my colleague Luke O’Brien became the target of an internet mob after he identified the New York woman behind @AmyMek, a massively popular Twitter account known for anti-Muslim, far-right-wing rantings that were endorsed by President Donald Trump. O’Brien was hit by a wave of harassment from her followers, including numerous death threats.

Twitter took minimal action against the perpetrators while suspending O’Brien for a joking response he made to one of his harassers. In the midst of this harassment campaign, which is ongoing, O’Brien asked me whether Twitter could be held legally accountable for failing to adequately address harassment on its site. He didn’t know it at the time, but I have some history with this question.

On Aug. 10, 1997, the Drudge Report published a story claiming that my father, journalist Sidney Blumenthal, who would start work as a senior adviser in the Clinton White House the next day, had a documented history of spousal abuse that included court records in Boston. The story was entirely false — concocted by a group of conservative political and media figures and fed to Matt Drudge with no sourcing. This literal bit of fake news didn’t remain confined to Drudge’s website. Drudge had recently struck a deal with AOL to exclusively republish his content, which it immediately did for this story. My parents sued AOL. But, as they soon learned, it’s nearly impossible to hold online platforms liable for content someone else published on their site.

So my simple answer to O’Brien was no. Twitter cannot be held legally liable for anything that appears on its platform. Nor could Facebook or YouTube be held responsible for the harm done to Sandy Hook parents by conspiracy-monger Alex Jones, who was banned from those digital platforms on Monday. That’s because of a short provision in a 1996 law that gives online intermediaries immunity from liability for any third-party content posted to or hosted on their platforms. The provision, known as Section 230, has become the legal building block for much of what we cherish about the internet today, and proponents of the law believe that the internet ecosystem of Google, Facebook, YouTube, Twitter, Reddit, Craigslist, Tumblr and so on would not be tenable without it.

Supporters of the law claim that even a tiny deviation from total legal immunity would destroy the internet and suppress free speech. They consider any change to Section 230 akin to amending the First Amendment.

The few critics of the provision, including lawyers, advocates and victims, argue that it enables discrimination, harassment and other threatening and criminal behavior. They have brought court cases to try to change the jurisprudence that granted vast immunity under the law, argued for a legislative fix and tried to motivate platforms to change their own rules. In April, Trump signed legislation passed by a bipartisan coalition in Congress to amend Section 230 by denying legal immunity to online platforms that could aid sex trafficking.

Their efforts are part of a growing realization that the internet is no longer a babe that could be strangled in the cradle by regulation; rather, it’s the home of the wealthiest, most powerful corporations in the world. And those efforts contributed to the slow-moving reconsideration of the lax regulatory structure that has enabled online platforms to become unfettered corporate monopolies and make money off the harassment that takes place on their sites.

The Good Samaritan Act

This story of Section 230 begins with Jordan Belfort, the cocaine- and quaalude-fueled investor whose life of criminal fraud was depicted in the Oscar-nominated 2013 film “The Wolf of Wall Street.”

In 1994, an anonymous user alleged on a financial forum on the Prodigy bulletin board service that Belfort’s firm Stratton Oakmont manipulated the initial public offering it managed for Solomon-Page Group to Stratton Oakmont’s benefit. Belfort sued the anonymous author as well as the online platform for defamation. (He’d admit five years later that he did, in fact, manipulate the offering.)

A defamation lawsuit filed by Jordan Belfort, the "Wolf of Wall Street," caught the attention of a congressman and began the process that created Section 230.

Prodigy argued that, just as a library or a bookstore can’t be held liable for defamation in every book it owns or sells, it shouldn’t be held liable for user posts on its platform. The New York Supreme Court disagreed because Prodigy actively moderated posts to try to remove foul language and other objectionable content, such as child pornography. By acting to remove some bad content, the court determined, Prodigy became responsible for all of it.

Chris Cox, then a Republican congressman from California, read about the Stratton Oakmont v. Prodigy decision in a newspaper on a cross-country flight and thought it was unfair. The ruling appeared to say that an internet platform could be immune from liability only if it did not moderate its content at all — not for spam, harassment, pornography or anything else. What kind of internet would that be, he thought.

And so Cox teamed up with Ron Wyden, then a Democratic congressman from Oregon and now a senator, to add an amendment to the Communications Decency Act that was working its way through Congress.

Section 230 stated, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” The legal immunity was explicitly meant to promote First Amendment principles of free speech in online discourse and to prevent both direct censorship by the state or private actors and collateral censorship by platforms refusing to host speech they feared could provoke a lawsuit. Cox and Wyden argued the law would also free “good Samaritans” like Prodigy, which wanted to remove objectionable content, from liability concerns. Their amendment was titled the Good Samaritan Act.

Congress passed the Communications Decency Act in 1996. Though the Supreme Court would strike down most of the law as unconstitutional the following year — it “impaled itself on the First Amendment,” internet legal theorist Lawrence Lessig wrote — Section 230 survived.

The Courts

Olivier Sylvain, a Fordham Law School professor and a critic of the broad reading of Section 230’s immunity provision, doesn’t see the legislative text as “creating a blanket immunity” from liability for online sites. Instead, he believes the law should be read as providing immunity only to sites “that take good faith efforts to take down objectionable material.”

But it was the courts, Sylvain argued, that read the provision very broadly and interpreted it as a blanket immunity for intermediaries.

After the Oklahoma City bombing by the far-right terrorist Timothy McVeigh in 1995, an anonymous user on an AOL message board posted an offer to sell merchandise with slogans that praised McVeigh (“McVeigh for President 1996”) and the bombing (“Visit Oklahoma ... It’s a BLAST!!!”). He attached the home phone number for Ken Zeran, then an entrepreneur living in Seattle. Unsurprisingly, Zeran faced a torrent of harassment and death threats.

The first major Section 230 court case stemmed from an anonymous AOL poster who offered merchandise glorifying the 1995 Oklahoma City bombing but gave another man's name and phone number.

Zeran repeatedly asked AOL to take down the content, which it did, but the messages kept reappearing. He ultimately sued AOL for negligence for allowing the content to be reposted after repeated notifications. The U.S. Court of Appeals for the 4th Circuit, however, ruled in 1997 that AOL could not be held liable because Section 230 “creates a federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service.” Essentially, if you run an online platform, you are not responsible for anything that happens on it, whether you are warned or not.

My family’s introduction to Section 230 overlapped with the Zeran case.

Two months before the Drudge Report posted the story about my father, it had closed a deal with AOL. The platform had agreed to pay Drudge a $3,000 monthly retainer to promote his gossip content to all of AOL’s 8.6 million subscribers. The hire, as AOL called it in a press release, would open “the floodgates to an audience ripe for Drudge’s brand of reporting” and would make “Matt Drudge instantly accessible to members who crave instant gossip and news breaks.” It was Drudge’s only source of income at the time.

My parents sued. “If it were writing on a clean slate, this Court would agree with plaintiffs,” Judge Paul Friedman of the U.S. District Court for the District of Columbia wrote. However, the enactment of Section 230 and the subsequent Zeran decision gave AOL immunity from liability for anything someone else published on its platform. (HuffPost was owned by AOL from 2011 to 2017, when AOL merged with Yahoo to become Oath.)

The courts’ wide interpretation of Section 230 led to immunity so sweeping that it protects Airbnb from housing discrimination lawsuits. It shields revenge porn sites like TheDirty that post user submissions, any site hosting nonconsensual pornography, and even companies like AOL from responsibility for defamation by writers they exclusively pay.

Congress’ initial provision, and the courts’ wide interpretation of it in the following years, gave birth to the social web and the digital platform monopolies that we have today.

“We were living in an age where people were talking about the internet like it was a utopia. The problem with utopias is that they are really, for lack of a better word, lies,” Mary Anne Franks, the co-founder of the Cyber Civil Rights Initiative, told HuffPost.

“What happens when Congress tells all these corporations, all these intermediaries, ahead of time that nothing’s ever going to happen to you?” said Franks, who is also a University of Miami law professor. “You’ve really got to ask what kind of corporation is going to spend the money or the resources or the time on developing anything like a robust response to harassment when they don’t have to.”

Online Civil Rights Violations

Brianna Wu was one of three women targeted for abuse and death threats by the misogynist Gamergate community in 2014.

As this online ecosystem has risen, so has organized online harassment. The last 10 years have seen numerous examples of digital mobs targeting people, largely those from historically marginalized groups.

This form of harassment is not only made possible by the design of the social web, it is also encouraged by it, Franks said. The immediacy, the anonymity, the ability to bring together disparate communities and the gamification of behavior (likes, retweets, upvotes), she argued, have transformed a mass of bystanders into harassers.

For years, major online platforms failed to properly respond to this wave of harassment, a failure that Section 230 permits: the law shields platforms whether they moderate or not. Often, targets could get the intermediaries’ attention only if they had some kind of prominence or a big enough soapbox. Take HuffPost reporter Jesselyn Cook, who was sexually harassed and targeted with threats by members of a pro-Donald Trump Facebook group.

Cook received hundreds of friend requests and messages from users of the pro-Trump Facebook group who professed a desire to rape or kill her. Facebook ignored her complaints, and HuffPost’s, about the harassment until she told the company she was going to write an article about it. Only then was the pro-Trump group, the source of the harassment campaign, deleted.

The same thing happened with Jones and his Infowars conspiracy program. Facebook, YouTube (an arm of Google) and other platforms refused to remove Jones despite his well-documented history of inspiring and encouraging harassment of the parents of children killed in a 2012 school massacre in Newtown, Connecticut. The big digital platforms removed him for breaking their rules on harassment only after extended public pressure from the parents, and only after Apple went first and booted him from its podcast offerings on iTunes.

The problems with platform immunity are not limited to harassment. Digital platforms can also claim Section 230 immunity from discrimination laws. Sylvain uses Facebook’s targeted advertising platform as an example.

Advertisers on Facebook, including those for housing and employment, have been allowed to exclude groups based on age, race, gender and “ethnic affinity,” according to ProPublica. The company could potentially claim that it is legally immune here, even though it publishes the specific categories that advertisers select from.

Back To The Courts

Although most Section 230 cases have followed Zeran’s broad interpretation, a few cases in the past decade have limited the law’s reach.

In 2008, the U.S. Court of Appeals for the 9th Circuit determined that Roommates.com, a platform for finding shared housing rentals, could be held liable for violations of California’s fair housing laws. The site required users to fill out a questionnaire that included questions about a potential roommate’s gender, age, sexual orientation and number of children, with prepopulated answers in a drop-down menu. Because Roommates.com had created the questions and the choice of answers, the court found, it was a publisher.

“[A] real estate broker may not inquire as to the race of a prospective buyer, and an employer may not inquire as to the religion of a prospective employee,” the court’s majority opinion stated. “If such questions are unlawful when posed face-to-face or by telephone, they don’t magically become lawful when asked electronically online. The Communications Decency Act was not meant to create a lawless no-man’s-land on the Internet.”

In 2017, Yasmeen Daniel, a survivor of a mass shooting in Wisconsin, sued the online gun sales platform Armslist for connecting her shooter, who was banned from purchasing a gun, with a private gun seller who did not perform a background check. In April, the Wisconsin Court of Appeals allowed the lawsuit to proceed over Armslist’s claims of Section 230 immunity because the website promoted how easily people legally barred from buying weapons could purchase them.

In the 2018 case Maynard v. Snapchat Inc., the Georgia Court of Appeals found that the social media company could not claim Section 230 immunity for a car crash that occurred while the driver was using Snapchat’s speed filter, a feature that overlays a car’s speed on a photo. The plaintiff, a passenger in the car, sustained permanent brain damage.

Attorney Carrie Goldberg is a leading critic of Section 230. Her law practice represents victims of online harassment, revenge porn and stalking.

Herrick v. Grindr is one ongoing case that implicates Section 230. Matthew Herrick’s jilted ex-boyfriend decided to get revenge by creating hundreds of fake profiles of Herrick on Grindr, a mobile platform for LGBTQ dating, that told other users to go to Herrick’s real address to engage in sexual acts, often involving rape fantasies. Herrick repeatedly told Grindr about the fake profiles (1,200 men showed up at his home), and while Grindr took them down, new ones kept appearing. Herrick eventually sued Grindr for negligence for failing to prevent the ongoing harassment.

Herrick’s case was taken up by Carrie Goldberg, a civil rights lawyer who specializes in revenge porn, online harassment, sexual assault and stalking, and Tor Ekeland, a specialist in cybercrime cases. The lawyers hoped to get around Section 230 by suing Grindr for fraud and deceptive business practices. The court, however, found in January that Section 230 protected Grindr.

“So, it’s a classic example of internet companies refusing to intervene in even the most extreme examples of crimes being perpetrated on their platform, even when they know about it, because they say they just don’t have to,” Goldberg said.

Herrick has appealed the decision.

‘The Worst Way To Handle These Problems’

Soon after the adoption of Section 230 in the 1990s, libertarian-minded scholars believed that the immunity the law provided would create a “marketplace of rules” among various online communities. One site might allow nudity or racism while another could be more restrictive (or inclusive). People could then choose which site they preferred to be on.

That’s not how it turned out. In the current digital ecosystem, online speech platforms monopolize entire sectors to the point where it is practically impossible not to use them. “I don’t really think we can say there’s a marketplace for rules in social media,” Sylvain said. “Who’s a competitor to Facebook?”

Critics of Section 230 believe that court cases like these provide one avenue to change it.

“If we can’t get our lawmakers to change CDA 230, the courts are equipped to do so,” Goldberg said.

Courts could reinterpret Section 230 to overturn the Zeran precedent and draw a distinction between publishers and distributors of online content. They could read the text of Section 230 as applying only to platforms acting as good Samaritans. Or they could hold that any repurposing of content in secondary markets (say, Facebook’s or Google’s presentation of user content as data for advertisers) makes a platform a publisher that forfeits its immunity.

Others believe that a legislative fix along these lines is necessary, and there is now precedent for such changes with the passage of FOSTA-SESTA. This anti-sex trafficking law, however, was highly controversial for precisely the reasons Section 230’s proponents cite in arguing that the law and its legal interpretation should stand. While intended to prevent sites that engaged in sex trafficking from claiming legal immunity, the law also forced the closure of sites used by consensual sex workers, a form of collateral damage or censorship. Section 230 proponents have long said that the law’s critics wanted to gut it by adding exceptions for individual crimes.

While some Section 230 critics supported FOSTA-SESTA, others agree that it was the wrong way to go. Franks called it “the worst way to handle these problems.”

“Those of us who were considering a fix to Section 230 wanted a comprehensive fix, not a ‘Let’s add this exception’ and ‘Let’s add this exception,’” Franks said. “And so what you get with SESTA is really the worst. It is a piecemeal approach that’s not principled and not universal.”

In the meantime, some digital platforms, under pressure from the public, media and government, are actually taking steps to reduce harassment and bad behavior on their platforms. Twitter, YouTube and Facebook have all announced new policies, expanded hiring for moderators and appointed teams to examine their terms of service to see how they can do better to reduce harassment and combat threats to users on their platforms. Reddit took action years ago to deal with various abusive communities, but now it says it can’t root out hate speech from the site.

Franks, who works with Twitter to improve its rules against harassment, believes the platforms’ efforts to protect their users are a good step. Many employees and executives, she said, want to root out harassment not only because it’s the right thing to do but also because user loyalty, and therefore the companies’ bottom line, depends on it.

But digital platforms have a long way to go.

“It’s a problem that they’re 10 years behind on,” Franks said. “This is the kind of thing that, if you truly wanted to tackle the problem of online abuse, you have to do it at the design stage, not on the back end.”

The problem was at the design stage. These companies knew that they would never be legally liable for any of that harassment.

This article was updated to include information about digital platforms removing Alex Jones on Monday.
