Section 230 of the Communications Decency Act has suddenly become one of the government’s hottest topics, thanks in part to a recently issued Executive Order from President Trump. As of the date of this article, there is pending legislation, a newly released Department of Justice report, and at least one request from Senators that the Federal Communications Commission review and redefine the text of 47 U.S.C. § 230. Section 230 shields online platforms from liability for third-party content and protects their ability to moderate the content posted by others.
Section 230 has been under attack for a while. The President had already proposed an Executive Order in August 2019 limiting Section 230’s applicability and protections, but it made no inroads. The Electronic Frontier Foundation (“EFF”), an organization whose mission is to “defend[] digital privacy, free speech, and innovation,” testified in favor of preserving Section 230 through its legal director, Corynne McSherry, who stated that “[c]hipping away at the legal foundations of the internet” was not the way to accomplish the goals of a “free” internet in which we can “exercise control over our online environments.” Then, earlier this year, the EARN IT bill (“Eliminating Abusive and Rampant Neglect of Interactive Technologies Act”), introduced on March 5, 2020 by Sens. Graham and Blumenthal, specifically called for removing Section 230 protections from online services that fail to meet best-practice requirements recommended by an as-yet nonexistent commission. That bill, called a “plan to scan every message online,” has been criticized because the proposed commission would be “completely controlled” by the Attorney General and law enforcement groups, and has been described as “a vehicle for creating a law enforcement wish list” that threatens the encryption of online communications. Finally, on May 28, 2020, after Twitter appended fact-checking labels to two of his tweets, President Trump issued an Executive Order titled “Executive Order on Preventing Online Censorship,” ostensibly to promote free speech on the internet and to narrow the reach of 47 U.S.C. § 230.
Enacted in 1996, Section 230 strikes a delicate balance between creating a safe harbor from unwarranted litigation for online platforms and preserving those platforms’ ability, and their need, to act as “good faith” watchdogs over the content posted on their sites. It seeks to protect these platforms from overwhelming liability while affording them the freedom to create and manage their technology, as well as their own policies and terms of use. The enactment was a reaction to Stratton Oakmont, Inc. v. Prodigy Services Co., in which Prodigy was found liable for defamatory material posted to its online bulletin board system because it actively “moderated” some content. While Prodigy could never have moderated everything posted to its site, its moderation, whether in response to complaints or in setting standards it occasionally enforced, was in the Stratton Oakmont court’s view a significant enough step to make Prodigy act as a traditional news editor or publisher would. The court therefore held that Prodigy’s moderation made it a “publisher” of the defamatory material, even though it had not authored the content and could not, and did not, actively monitor all content. Section 230 was designed precisely to remove that publisher liability and to incentivize platforms to welcome third-party content, creating the Internet space we know today.
With a few exceptions (originally related to criminal law and intellectual property violations), an online platform cannot be held liable for content posted to its site by users. Subsection (c)(1) states that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This Section 230 protection led directly to the rise of social media as a means of communication: users can post content, and platforms need not fear litigation even when that content is unlawful. All of this has unquestionably been beneficial. So why come after Section 230?
Trump’s May 28 Executive Order centers its directives on subsection (c)(2)(A), the moderation clause, which grants immunity for “any action . . . in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” Under this language, Twitter, Facebook, and other online platforms have leeway to determine which content is acceptable for publication on their sites per their own terms of service, and thus the ability to moderate, place warnings on, and fact-check posts made by their users. The Executive Order seeks to change this approach in several ways. After initially laying out the “policy” of the administration, it directs “all executive departments and agencies” to “ensure that their application of section 230(c) . . . reflects the narrow purpose of the section.” It further directs the Secretary of Commerce, in consultation with the Attorney General and acting through the National Telecommunications and Information Administration (“NTIA”), to file a petition for rulemaking with the FCC; in other words, the Order demands that the NTIA ask the FCC to consider proposing regulations addressing Section 230. It then requests that each department and agency review its federal spending on advertising and marketing directed at online platforms, that the Department of Justice review the “viewpoint-based” restrictions imposed by the platforms, and that the Federal Trade Commission “consider taking action” to prevent “unfair or deceptive” acts or practices in commerce, informed by complaints gathered through the White House’s internal “Tech Bias Reporting tool.”
It is unclear whether the Order can withstand legal scrutiny: the President issued it to ask the agencies of the United States, specifically the FCC, to consider new rulemaking to “clarify” Section 230 in line with the purported policy of the administration, but the FCC has no authority to dictate changes to the language or interpretation of Section 230. “The key language [of 230] . . . leaves no ambiguity to enable . . . FCC action at all.” Given that lack of ambiguity, the FCC requires direction from Congress before it can regulate. That might not matter, however, since one definite effect of the Order was to spur Congress into action.
Members of Congress and the Department of Justice immediately responded to the Order with proposed legislation, requests that the FCC redefine Section 230, and, in the DOJ’s case, a review of the section and a published list of recommendations for reform. Sens. Rubio (R-FL), Hawley (R-MO), Loeffler (R-GA) and Cramer (R-ND) asked the FCC to “clearly define the framework under which technology firms, including social media companies, receive protections under Section 230.” The DOJ report likewise focuses on redefining Section 230 to provide immunity only for moderation of content that is “illicit,” including a “carve-out for bad actors who purposefully facilitate or solicit content that violates federal criminal law.” It requests a statutory definition of “good faith” on the part of the platform and “plain and particular” terms of service echoing the policy of moderating only illicit content, accompanied by a “reasonable explanation” for any moderation decision. The House bill filed by Rep. Lamborn (R-CO) takes a different route, alleging specific censorship by Twitter against the President. The platforms themselves have begun speaking on the issue: Facebook responded to the Executive Order with a statement that it is a “platform for diverse views” and that “repealing or limiting section 230 . . . will restrict more speech online, not less.”
What are the real regulatory or legislative options? Congress could choose to make minor revisions or pursue an outright repeal; the loss of Section 230, however, would inarguably throw social media platforms into disarray. Platforms “simply wouldn’t be able to exist with the risk that republishing content could bring.” The “free” internet in which we can exercise “control” over our online environments would cease to exist. The alternative would be to reclassify online platforms and social media as “public utilities” subject to rulemaking and regulation by government agencies (such as the FCC and FTC), a concept suggested in 2017 by Steve Bannon, raised again in 2018 by Rep. Steve King (R-IA) during the Zuckerberg hearing, and recently taken up by scholars. FCC involvement also raises the possible regulatory model of treating social media companies like broadcast networks, rendering them subject to a public-interest requirement similar to the “Fairness Doctrine” of 1949, which attempted to ensure “balanced and fair” coverage by broadcast media. The FCC ultimately repealed that doctrine in 1987 after concluding that it had the unintended effect of harming the public interest and violating the free speech rights of broadcasters.
Although the public utility option would extend First Amendment guarantees to all users, reclassifying social media as a public utility would fundamentally alter the technology industry in the U.S. by prohibitively increasing platforms’ costs and greatly deterring innovation, making it increasingly difficult, if not impossible, for new and smaller companies to enter the social media space. Declaring social media a public utility carries an administrative cost as well: “Administrative difficulty is an important drawback . . . [R]eforms on this order demand that government develop new organizational and technical processes, increase staffing, and perhaps create new task forces or agencies. Public utility options . . . would incur significant cost either through reallocating current government funds or collection of new revenues.”
Cost aside, social media is not an actual utility in the truest sense of the word. Social media platforms are themselves innovative giants, and they rely on that innovation to thrive. They are dynamic entities that create and manage their own policies, APIs (application programming interfaces), algorithms, web design, and base code. As one scholar argues: “If Twitter, Google, Apple, Facebook, Amazon, or any other social media platform were forced to surrender control of its APIs to regulatory officials, this would significantly undermine the firm’s right and ability to control one of its most valuable assets – perhaps its only monetizable asset.” Moving that control into government hands and requiring government approval for proposed changes would “force [social media] to stop innovating at the speed of Silicon Valley and start innovating at the speed of Washington bureaucracy, which is to say, extraordinarily slowly.”
Each platform is unique, offering a distinctly different service from its competitors. Each is built on its own proprietary IP, and each controls its own updates, integrations, and user base. To subordinate that IP, framing it in the context of regulatory authorities’ decisions about what each platform can do and how it can do it, would stifle the industry’s dynamism and innovation. Forcing a platform to conform to a set of rules overseen by the FCC or any other government agency would create a social media future in which platforms were, if not carbon copies of one another, at least far less distinctive entities unable to adapt readily to changing needs. Instead, those platforms would have to rely on government agencies to tell them how to respond to new technologies and global internet environments.
The only workable option appears to be preserving most, if not all, of Section 230 as it stands. If Congress is to revisit Section 230, it needs to take a fresh look at the section in light of the alternative, the public utility option, and recognize that Section 230 – designed by a prominent Republican and a prominent Democrat, then-Representatives Christopher Cox and Ron Wyden, with broad bipartisan support – is doing precisely what it was intended to do.
Any request to revisit Section 230 will likely draw scrutiny for attempting to limit the platforms’ own speech (and some already have: platforms’ moderation decisions, warnings, and labels can be viewed as their own speech, which is itself generally protected). Even before the DOJ report was issued, the Center for Democracy & Technology filed suit on June 2, 2020, alleging that Trump’s Executive Order itself violates the First Amendment and that it “was intended to have, and is having or likely to have, the effect of chilling the constitutionally protected speech of online content platforms.” The EFF shares this view, stating that the Order violates the First Amendment and, in a later article, suggesting that the Order and the EARN IT bill seek to “damage online users’ speech.”
The Order has spurred debate in government and politics, with both sides arguing that something needs to change. Yet even if Section 230 is imperfect, every alternative under consideration is markedly more problematic. The Order has likely already frightened some platforms into backing off from moderating content, particularly smaller ones that cannot afford a change in their legal liability framework or the loss of government advertising dollars. The Order, the ensuing reactions, the debates over freedom of speech in both the media and the government, and the already-pending litigation all point to the probability that Section 230 is here to stay – at least for the short term.
By: Pierre Ciric, Esq., Member of the Firm, the Ciric Law Firm, PLLC
& Layla Maurer, 2L, Case Western Reserve University School of Law
This material is for general information purposes only and should not be construed as legal advice or any other advice on any specific facts or circumstances. No one should act or refrain from acting based upon any information herein without seeking professional legal advice. The Ciric Law Firm, PLLC makes no warranties, representations, or claims of any kind concerning the content herein. The Ciric Law Firm, PLLC and the contributing presenters or authors expressly disclaim all liability to any person in respect of the consequences of anything done or not done in reliance upon the use of contents included herein.
This material is subject to the copyright laws of the United States and cannot be reproduced without the prior written permission of the Ciric Law Firm, PLLC. Copyright © 2020