
Executive Order on Preventing Online Censorship – Legal Implications


President Trump recently signed an executive order aimed at preventing online censorship. The short of it is that online social media platforms wield incredible power to influence public opinion on what's happening in the world.

These platforms are generally seen as neutral arbiters, allowing users to post content as they please, provided it conforms to community guidelines. According to Trump, however, these platforms moderate and censor what certain users post based on their political affiliation, exhibiting bias in how they relay political speech.

It would be naïve to dismiss these claims as utter hogwash; there has, in fact, been evidence of political bias in the past. From a legal standpoint, however, the order has the potential to create significant issues, the most glaring being that it tramples on the right to free speech and expression.

So, what does this executive order mean for social platforms like Twitter, Facebook, and Instagram? And, how does it affect your ability to freely express your opinion on political matters and any other issues you’re passionate about?


This article explores in depth the legal implications of the latest Trump executive order.

The Telecommunications Act of 1996

Before getting into the legal implications, it helps to go back to the beginning. In 1996, President Clinton signed the Telecommunications Act of 1996 into law, making major amendments to the Communications Act of 1934.

It set new ground rules for regulation and competition in virtually every sector of the communications industry. The provisions of the Act fall into five major areas, also referred to as "Titles":

  • Telecommunications services
  • Broadcast services
  • Cable services
  • Regulatory reform
  • Obscenity and violence

Title V of the Telecommunications Act of 1996: Obscenity and Violence

This fifth area of the Act is what is of interest here. It details provisions on indecent communications over the internet and other computer networks. These provisions are known as the Communications Decency Act (CDA) of 1996.

It imposes criminal penalties on individuals who purposefully transmit obscene content across “an interactive computer service,” which the Act specifically defines to include the internet. It also outlaws the communication of obscene and indecent content with full knowledge that the recipient is under 18.

On the flip side, the CDA also offers content providers a defense against the minors/indecency violations, but only if they prove that they've taken reasonable, effective, and appropriate steps to prevent or restrict minors' access to the offensive content.

It also allows for "good Samaritan" blocking: a provider may restrict access to material, on a purely subjective basis, if the provider or a user considers a particular piece of information disseminated on the platform to be objectionable.

So, in short, commercial online content providers have the right, by law, to block content they deem inappropriate on their platforms, without any risk of civil or criminal liability. According to the Act, the Federal Communications Commission (FCC) has no jurisdiction to regulate the internet.

Section 230 of the Communications Decency Act

Tucked away inside the CDA of 1996 lies one of the most valuable tools for protecting freedom of expression and innovation online, one with a profound impact on free speech and social media: Section 230.

This may seem somewhat ironic, since the spirit of the original legislation was to restrict free speech on the internet. Section 230 was originally attached to a rather draconian crackdown on online indecency, driven by a moral panic over protecting minors from pornographic content.

Had the rest of the CDA been left intact, its provisions would have required age verification from all internet users, not just on porn sites, but on any site with user-generated content.

At the time it was enacted, the internet community was up in arms. People felt that it was a direct infringement of their freedom of expression.

The Electronic Frontier Foundation (EFF), a nonprofit organization that defends civil liberties in the digital sphere, challenged the anti-free-speech provisions of the CDA. In Reno v. ACLU (1997), the Supreme Court ruled in the challengers' favor, striking the offending provisions from the Act while leaving Section 230 intact.

Publisher vs. Platform

Section 230 of the CDA states that: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” (47 U.S.C. § 230).

This means that online intermediaries that host or republish speech are shielded from legal liability for what others say and do. The definition of "intermediaries" here extends beyond conventional Internet Service Providers (ISPs) to include any online platform that allows third parties to publish content on its website.

Although there are exceptions for certain intellectual property and criminal claims, the spirit behind Section 230 is to create broad protection that lets free speech and online innovation thrive.

Publisher Liability

The legal and policy framework in the CDA that shields platforms from publisher liability is what has paved the way for millions of users to upload YouTube videos, publish reviews on sites like Yelp and Amazon, and post personal opinions on social media sites like Twitter and Facebook.


It would be naïve for anyone to expect online intermediaries to censor every single piece of questionable or objectionable content that gets published on their platform. To then hold them liable for what their users publish would be unfair, to say the least.

If that were the case, it would be in their best interest to not host any user-generated content, to shield themselves from publisher liability. Alternatively, they would need to actively censor what their users post to offset any potential suits against them.

CDA Section 230 also accords the same level of protection to website owners and bloggers who act as intermediaries when they:

  • Host comments on their posts left by their readers
  • Accommodate the work of guest bloggers on their website
  • Receive any tips or information sent to them via email or through their RSS feeds

The protections still apply even if the website owner is well aware of the objectionable content, whether they exercise their editorial authority to censor the material or leave it as it is.

President Trump's Executive Order

The Trump Twitter handle is no stranger to controversy. Following a night of intense protests in Minneapolis after a police officer killed an unarmed African-American man in custody, the President tweeted a phrase first uttered by Miami Police Chief Walter Headley in 1967, at the height of the civil rights unrest of that era.


Twitter hid the "when the looting starts, the shooting starts" tweet behind a message stating that it violated the site's rules. But users could still click through if they wanted to view it.

Just two days before, Trump had used his Twitter account to allege that mail-in voting would lead to massive fraud. In response, Twitter added a fact-check link to a Trump tweet for the first time.

The President responded by issuing an executive order on preventing online censorship.

Legal Implications

The executive order asserts that the liability protections provided for in Section 230(c)(2) do not apply to social media platforms that moderate content in a "pretextual" or deceptive way, stifling viewpoints their owners disagree with.

It directed federal departments and agencies to review their marketing expenditure on online platforms that appear to be “problematic vehicles for government speech” due to viewpoint discrimination and other “bad practices.”

The order further argues that large online platforms like Facebook and Twitter are the modern-day equivalents of "public squares" and should, therefore, not restrict or limit protected speech.

It also asserts that online platforms may be misrepresenting their policies on moderation. It further encourages the Federal Trade Commission (FTC) to look into these deceptive or unfair practices and take action against the offending platforms.

The order also directs the FTC to consider whether the complaints it receives about viewpoint-based moderation constitute "violations of law."

The order also directs the attorney general to convene a working group to explore avenues for enforcing state statutes that prohibit online platforms from engaging in deceptive or unfair practices. The AG is further directed to develop proposed federal legislation that promotes the goals of the executive order.

What This Means for the Existing Protection Laws Related to Social Media

While the proposals set out by the executive order are no doubt ambitious, the reality is, many of them will prove quite difficult to achieve.


For starters, the claim that platforms lose the liability protections stipulated in Section 230(c)(2) if they are found to moderate content in a "pretextual or deceptive" way has no legal merit, and it is not binding on any court.

Second, the FCC has not traditionally issued regulations concerning Section 230, and even its role in interpreting the provision is questionable. So, it remains unclear to what extent (if any) this federal agency can flex its muscles on the online censorship front.

Third, the suggestion that large social media platforms like Facebook and Twitter are the modern-day equivalents of public squares would require major changes to existing First Amendment jurisprudence, which, quite frankly, is unlikely to happen.

Realistically, however, federal departments and agencies are likely to cut their marketing expenditure on social media platforms that appear to be censoring certain political viewpoints.

As for encouraging the prosecution of online platforms for deceptive and unfair trade practices, this will be more effective as a political tool than as a legal one.

Keep in mind that, although the order targets social media giants like Facebook and Twitter, its interpretation of Section 230, and any future regulations that flow from it, would apply to all entities that operate an interactive interface, forum, or online service hosting user-generated content.

Social Media Censorship Examples

Social media sites can ban users for any reason. So, if you’re a user on their platform, you have to play by their rules.

For instance, in light of the recent civil unrest happening all over the country, the platforms may decide to ban users for posting off-color remarks, making racist comments, or even promoting white supremacy.

Now, if an individual who gets censored or banned decides to pursue a Twitter lawsuit, Facebook lawsuit, or Google lawsuit against the tech giants, the reality is that they (the individual) would lose 99 percent of the time.

Remember Laura Loomer? The far-right activist who got kicked off Twitter for her anti-Muslim tweets about Ilhan Omar, then the US Representative-elect for Minnesota? Well, the US Court of Appeals in Washington, DC, dismissed her suit.

She was suing Twitter, Facebook, Apple, and Google, alleging that they violated her First Amendment rights by conspiring to suppress conservative content on their platforms. Other social media censorship examples include:

  • Charles C. Johnson v. Twitter – Twitter defended its decision to terminate the plaintiff’s account on the basis that he posted threatening tweets. The site’s terms and conditions state that Twitter can terminate a user account “for any reason.” He lost the suit.
  • Craig Brittain v. Twitter – Brittain’s lawsuit hinged on treating the social media platform as a publisher, a claim that CDA Section 230 forecloses. He lost the suit as well.
  • Jared Taylor v. Twitter – Twitter’s decision to ban the plaintiff was protected under the provisions of CDA Section 230. He lost the suit.

Better Safe Than Sorry

Given that the executive order increases scrutiny of online content moderation, now is a good time for companies and individuals to take stock of whether they host user-generated content on their platforms. If they do, they should ensure that the moderation practices in place mirror the policies outlined in the site’s terms of service.

So, if you’re a platform owner looking to reduce the risk of a “deceptive and unfair trade practices” lawsuit, the best way to demonstrate good faith is to remove any discrepancies between your stated moderation policies and your actual practices.

Nonetheless, although sweeping changes across interactive platforms are unlikely, it is still a good idea to be aware of the potential legal implications you could face when moderating user content.

If you have any legal questions, feel free to chat online with a Laws101.com attorney. You’ll be put in touch with a lawyer who can give you legal guidance on your specific issue.
