Questions around freedom of expression are once again in the air, as concerns swirl around the Internet’s role in the spread of misinformation and intolerance, and around how to balance these legitimate concerns with the importance of maintaining spaces for the free and open exchange of ideas. Within this context, countries have begun to rethink how they regulate online speech, including the principle of online intermediary immunity, one of the foundations that has allowed the Internet to flourish as vibrantly as it has.

The principle of online intermediary immunity provides internet platforms (e.g. Facebook, Twitter) with legal protection against being held liable for content generated by third-party users. These laws serve multiple policy goals, ranging from promoting free expression and access to information to encouraging economic growth and technical innovation. But balancing these objectives has proven complicated, as seen in debates around election-time misinformation campaigns and so-called “cancel culture”. With the European Union reforming its major legislation on Internet regulation and a similar debate ongoing in the United States, it seems a propitious time to examine how different jurisdictions implement their online intermediary liability laws.

Traditionally, the United States has provided some of the most rigorous protection for online intermediaries under the Communications Decency Act (CDA) of 1996, section 230 of which bars platforms from being treated as the “publisher or speaker” of third-party content and establishes that platforms moderating content in good faith retain their immunity from liability. However, the principle of online intermediary immunity has recently come under increased scrutiny, with certain countries implementing legislation to restrict the principle and others considering doing so. For example, Germany’s Network Enforcement Act (NetzDG), in effect since 2018, requires platforms to respond to reports of illegal speech within 24 hours, delete suspected criminal content, and send reports to the police, with fines of up to €50 million for non-compliance. France’s Fighting Hate on the Internet law of 2020 likewise required online platforms to take down material deemed obviously illegal almost instantly, on pain of heavy fines and without judicial safeguards, but was struck down by France’s Constitutional Council over free speech concerns.

In the United States itself, there are also vocal calls to restrict online intermediary immunity protections. Republican Senator Josh Hawley of Missouri has introduced two pieces of legislation ― the Limiting Section 230 Immunity to Good Samaritans Act and the Ending Support for Internet Censorship Act ― to undercut the liability protections provided for in section 230 CDA, respectively by limiting them to providers that use “neutral” content moderation practices and by removing them altogether for platforms that curate political information. The bipartisan Platform Accountability and Consumer Transparency (PACT) Act, introduced by Democratic Senator Brian Schatz of Hawaii and Republican Senator John Thune of South Dakota, would require platforms to disclose their content moderation practices, implement a user complaint system with an appeals process, and remove court-ordered illegal content within 24 hours. Joe Biden has even called for the outright repeal of section 230 CDA.

These questions are particularly important in Canada, which does not currently have any domestic statutory measures limiting the civil liability of online intermediaries. Although reforms like those described above are proposed with the laudable goals of combatting disinformation, harassment, and threats of violence, Canada should not adopt legislation that increases restrictions on freedom of speech. The risk is too high that such measures will instead motivate platforms to engage in active censorship, because of the prohibitive costs associated with the near-impossible feat of preventing all objectionable content, especially for smaller providers. Instead, what is needed is national and international legislation that balances protecting users from harm with safeguarding their right to freedom of expression.

One possible model for Canada can be found in the newly signed free trade agreement between Canada, the United States, and Mexico, known as the United States–Mexico–Canada Agreement (USMCA). Article 19.17 USMCA mirrors section 230 CDA by shielding online platforms from liability relating to content produced by third-party users, but adds the proviso that individuals who have been harmed by defamatory or other objectionable online speech may obtain equitable court-ordered remedies. This means that platforms continue to enjoy immunity from liability, but must act against harmful content pursuant to a court order, for example by taking down the objectionable material. USMCA thus appears to balance providing redress for online harms with protecting online platforms from liability for user-generated content, and it provides a valuable starting point for Canadian legislators considering how to reform Canada’s domestic online intermediary liability laws.