Author: Rachel Zuroff

Privacy Shield 3.0: The Future of Personal Data Transfers Under the New EU-U.S. Data Privacy Framework

After more than two years of a perilous environment for personal data transfers between the European Union (“EU”) and the United States (“U.S.”) and much negotiation between the parties, on October 7, 2022, President Joe Biden issued an Executive Order on “Enhancing Safeguards for United States Signals Intelligence Activities.” The Executive Order paves the way to easing tensions around cross-border transfers between the EU and the U.S. and is a major step towards implementing a new EU-U.S. Data Privacy Framework (“DPF”). If successfully implemented, the DPF would allow personal data to be smoothly transferred from the EU to the U.S. for the first time since 2020, when the Privacy Shield Framework was invalidated by the Court of Justice of the European Union (“CJEU”) in a landmark decision on data transfers known as Schrems II.

Schrems II

Privacy aficionados will remember that in Schrems II, the CJEU invalidated the Privacy Shield Framework and ruled that companies moving personal data from the EU to the U.S. must ensure that there are extra measures in place to protect that information, such as standard contractual clauses (“SCCs”) and additional technical and organizational measures (“TOMs”). Prior to this decision, the European Commission had issued an adequacy decision for the Privacy Shield Framework which allowed companies to easily comply with EU requirements for transferring personal data by self-certifying under the framework and publicly committing to comply with its requirements. However, in Schrems II, the CJEU struck down the Privacy Shield Framework, finding that the U.S. did not offer an adequate level of data protection. The court’s ruling was primarily based on two findings. First, U.S. law, particularly Section 702 of the Foreign Intelligence Surveillance Act (“FISA”) and Executive Order 12333, does not limit surveillance programs to what is strictly necessary and proportional, thereby violating art. 52 of the EU Charter of Fundamental Rights (“EU Charter”). Second, U.S. law does not offer EU data subjects a right to an effective remedy or a fair trial in the U.S. in case their rights are violated by a surveillance program, and hence violates art. 47 of the EU Charter. Without a bilateral agreement in place between the EU and U.S., companies and regulators were left needing to conduct case-by-case analyses to determine what SCCs and TOMs were needed to ensure that any data transfers would be adequately protected to meet EU standards.

Following the Schrems II decision, digital rights advocacy group NOYB (founded by Max Schrems) lodged 101 complaints in August 2020 with every data protection authority in the EU and European Economic Area. NOYB’s complaints allege that businesses using services provided by Google and Facebook which involve the transfer of personal data, such as Google Analytics and Facebook Connect, may no longer do so after Schrems II because the businesses cannot ensure an adequate protection of the transferred personal data. For context, Google Analytics is a service that can be integrated by websites such as e-commerce sites to track visitors and perform statistical analyses. Each visitor is assigned a unique identifier, and the identifier and associated data are then transferred by Google to the U.S. where the data is stored. Over the past two years, four European data protection authorities (“DPAs”), including Austria, Denmark, France, and Italy, have found that the use of Google Analytics violates art. 44 of the General Data Protection Regulation (“GDPR”) because the tool allows personal data to be transferred outside of the EU without adequate safeguards against potential access by U.S. intelligence agencies.

These decisions all share common features. For example, the DPAs found that the additional safeguards implemented by Google, such as data encryption and IP anonymization, were insufficient. First, Google held the decryption keys, which could be requested by U.S. agencies under a FISA order along with the targeted data. Second, Google Analytics’ IP anonymization option was deemed to offer a form of pseudonymization rather than anonymization, because users could still be identified through other data points, meaning that users’ IP addresses still counted as personal data. Given the widespread commercial use of services such as Google Analytics, these cases highlight the critical commercial need for an effective EU-U.S. agreement on data transfers.

Privacy Shield 3.0

The DPF aims to do just that by addressing the concerns underlying the Schrems II decision and restoring flourishing trans-Atlantic data flows. To ensure that data transfers under the program meet the CJEU’s essential equivalence test, the Executive Order commits the U.S. to implementing new safeguards that ensure intelligence activities are undertaken only when necessary and proportionate. The Executive Order also creates a new independent and binding mechanism to allow EU individuals to seek redress if they believe they have been unlawfully targeted by U.S. intelligence activities.

First, to ensure that intelligence activities are undertaken only when necessary and proportionate, the Executive Order states that such activities shall only be conducted following a determination that they advance a legitimate national security objective and do not disproportionately impact the protection of personal privacy and civil liberties. To meet this criterion, the activities must align with 12 “legitimate objectives,” such as protecting against threats to U.S. personnel, and avoid four prohibited objectives, such as suppressing freedom of expression or dissent. Finally, the Executive Order prescribes certain oversight mechanisms, such as regular independent reviews of whether intelligence activities strayed outside these bounds, a mandatory independent Privacy and Civil Liberties Officer and Inspector General for each element of the intelligence community, and required training on the Executive Order.

To meet the second criterion of the CJEU’s essential equivalence test, namely that EU data subjects have a remedy in U.S. courts, the Executive Order creates a two-step redress system whereby qualifying complaints are entitled to a full investigation and an option to appeal to an independent body. First, the Director of National Intelligence’s Civil Liberties Protection Officer (“CLPO”) receives and investigates individuals’ claims that their rights have been violated. Following an investigation, the CLPO informs the complainant that the review either did not identify any violations or that the CLPO issued remediation measures. Once the CLPO concludes its investigation, an individual may move to the second tier of the system and apply for review to the Data Protection Review Court (“DPRC”). If the DPRC disagrees with the CLPO’s assessment, it may issue its own determination and remedial measures. The newly created DPRC is thus meant to address CJEU critiques by establishing a redress mechanism with independent judges and mandating that U.S. intelligence agencies comply with the measures stipulated by the Court.  

However, the DPF will still need to receive an Adequacy Decision from the EU Commission. It will also likely face further judicial challenges from online privacy advocacy groups, such as NOYB. Finally, the CJEU may ultimately determine that lawful data transfers between the EU and U.S. are impossible until Section 702 of FISA is amended. If the DPF is struck down, that would likely toll the death knell for any hope of re-establishing easy data transfers between the EU and U.S. until such time as FISA is amended. On the one hand, this outcome could benefit EU businesses, in that smaller, local companies would have the opportunity to provide digital services normally dominated by tech giants and their economies of scale. On the other hand, it could stymie growth and competition, because only the largest corporations would have the resources to implement SCCs and TOMs that would ensure an adequate level of protection for transferred data. In conclusion, a new agreement between the EU and U.S. is the most likely avenue to benefit businesses and consumers by reducing legal uncertainty and compliance costs.

This piece was originally published by the Chronique du CTI: https://www.blogueducrl.com/2022/11/chronique-du-cti-privacy-shield-3-0-the-future-of-personal-data-transfers-under-the-new-eu-us-data-privacy-framework/

Data Transfers under Quebec’s Bill 64

Quebec’s Act to Modernize Legislative Provisions respecting the Protection of Personal Information (“Bill 64”) is set to bring important changes to the province’s privacy frameworks. Chief among these changes are new requirements for cross-border transfers of personal information. As of September 22, 2023, businesses will need to complete the following steps before they transfer personal information outside of Quebec:

  1. Conduct a privacy impact assessment,
  2. Confirm that the personal information will receive adequate protection in the target jurisdiction, and
  3. Enter into a written agreement that takes into account the results of the assessment and establishes terms to mitigate any risks that were identified.

Bill 64 is an important step in modernizing Quebec’s privacy regime, but the provisions related to data transfers have caused concern within the business community. This is because a significant amount of the personal information transferred out of Quebec goes to the US, and business leaders are worried that these transfers will no longer be allowed once the law comes into force.

For those seeking to predict how Quebec courts might apply the new requirements, recent developments within the European Union (“EU”) offer some food for thought. Europe’s General Data Protection Regulation (“GDPR”) remains the gold standard for privacy legislation worldwide and likely inspired certain of Bill 64’s provisions. For example, under the GDPR, companies that transfer personal data outside of the European Economic Area (“EEA”) must ensure that the data continues to enjoy the same level of protection in the target jurisdiction.

Unlike Bill 64, the GDPR contains a number of mechanisms to facilitate data transfers to other jurisdictions. Among these are Adequacy Decisions, by which the EU Commission declares that a certain country offers a comparable level of data protection to that required by the GDPR. This allows companies to easily transfer EU personal data to that country and can be critical for ensuring smooth commerce between major trading partners. The US, for instance, benefitted from one such Adequacy Decision, known as the Privacy Shield, up until July 2020, when the Court of Justice of the European Union (“CJEU”) invalidated the Adequacy Decision in the Schrems II decision. Since then, officials in the EU and US have been intently negotiating a new deal.

Although Bill 64 does not have a similar mechanism to facilitate international data transfers, these developments are still relevant because each Quebec-based business must determine for itself whether another jurisdiction offers adequate protections. If EU courts were to continue to find that the US did not offer adequate legislative safeguards for personal information, then it would seem likely that Quebec courts would follow suit, with important consequences for businesses using American data processing or storage services.

Indeed, the past couple of weeks have only highlighted the importance of reconciling international data protection laws to allow for smooth cross-border transfers of personal information and facilitate global trade. For example, in January and February 2022, two European data protection authorities found that the use of Google Analytics by companies in the EU violates Article 44 of the GDPR. Given the ubiquity of personal data in modern commerce, big and small, decisions like these are enough to give businesses cause for concern.

However, in exciting news for the business world and privacy buffs on both sides of the Atlantic, the EU Commission and the US Government announced on March 25, 2022, that they have come to an agreement “in principle” on a new Trans-Atlantic Data Privacy Framework. Few precise details of the agreement are yet available, but both parties agree that the new framework will address the concerns raised by the CJEU in the Schrems II decision. These new changes make it more likely that businesses transferring personal information from Quebec to organizations certified under the new Privacy Shield will be compliant with Quebec law requirements. News of the agreement should, therefore, be happily received by Quebec businesses of all sizes.

Protecting Freedom of Expression Online

Questions around freedom of expression are once again in the air. While concern around the internet’s role in the spread of disinformation and intolerance rises, so too do worries about how to maintain digital spaces for the free and open exchange of ideas. Within this context, countries have begun to re-think how they regulate online speech including through mechanisms such as the principle of online intermediary immunity, arguably one of the main principles that has allowed the internet to flourish as vibrantly as it has.

Laws that enact online intermediary immunity provide internet platforms (e.g., Facebook, Twitter, YouTube) with legal protections against liability for content generated by third-party users. Simply put, if a user posts illegal content, the host (i.e., intermediary) may not be held liable. An intermediary is understood as any actor other than the content creator. This includes large platforms such as Twitter where, for example, if a user posts an incendiary call to violence, Twitter may not be held liable for that post. It also holds for smaller platforms, such as a personal blog, where the blogger is protected from being held liable for comments left by readers. The same is true for the computer servers hosting the content.

These laws have multiple policy goals, ranging from promoting free expression and information access, to encouraging economic growth and technical innovation. But balancing these objectives against the risk of harm has proven complicated, as seen in debates about how to prevent online election disinformation campaigns, hate speech, and threats of violence. There is also a growing public perception that large-scale internet platforms need to be held accountable for the harms they enable. With the European Union reforming its major legislation on internet regulation, the ongoing debate in the United States regarding similar reforms, and the recent January 6 attack on the U.S. Capitol, it is a propitious time to examine how different jurisdictions implement online intermediary liability laws and what that means for ensuring that the internet continues to allow deliberative democracy and civic participation.

United States

Traditionally, the United States has provided some of the most rigorous protections for online intermediaries under section 230 of the Communications Decency Act (CDA), which bars platforms from being treated as the “publisher or speaker” of third-party content and establishes that platforms moderating content in good faith maintain their immunity from liability. However, there are increasing calls on both the left and right for this to change.

Republican Senator Josh Hawley of Missouri introduced two pieces of legislation in 2020 and 2019 respectively ― the Limiting Section 230 Immunity to Good Samaritans Act and the Ending Support for Internet Censorship Act ― to undercut the liability protections provided for in section 230 CDA. If passed, the Limiting Section 230 Immunity to Good Samaritans Act would limit liability protections to platforms that use value-neutral content moderation practices, meaning that content would have to be moderated with absolute neutrality, free from any set of values, to be protected. This is an unrealistic standard, however, given that all editorial decisions involve value-based choices, be it merely a question of how to sort content (e.g., chronologically, alphabetically, etc.) or the editor’s own personal interests and tastes. The Ending Support for Internet Censorship Act would similarly remove liability protections for platforms that curate political information, a standard whose vagueness risks discouraging platforms from hosting politically sensitive conversations and chilling free speech online.

The bipartisan Platform Accountability and Consumer Transparency (PACT) Act, introduced by Democrat Senator Brian Schatz of Hawaii and Republican Senator John Thune of South Dakota in 2020, would require platforms to disclose their content moderation practices, implement a user complaint system with an appeals process, and remove court-ordered illegal content within 24 hours. While a step in the right direction towards greater platform transparency, PACT could still endanger free speech on the internet by motivating platforms to remove any content that might be found illegal rather than risk the costs of litigation, thereby taking down legitimate speech out of an abundance of caution. PACT would also entrench the already overwhelming power and influence of the largest platforms, such as Facebook and Google, by imposing onerous obligations that small-to-medium size platforms might find difficult to respect.

During his presidential campaign, Joe Biden even called for the outright repeal of section 230 CDA with the goal of holding large platforms more accountable for the spread of disinformation and extremism. This remains a worrisome position and something that President Biden should reconsider, given the importance of section 230 CDA for prohibiting online censorship and allowing the internet to flourish as an arena for public debate.

Canada

Questions around how to ensure the internet remains a viable space for freedom of expression are particularly important in Canada, which does not currently have domestic statutory measures limiting the civil liability of online intermediaries. Although proposed with the laudable goals of combatting disinformation, harassment, and the spread of hate, legislation that increases restrictions on freedom of speech, such as the reforms described above, should not be taken in Canada. These types of measures risk incentivizing platforms to actively engage in censorship due to the prohibitive costs associated with the nearly impossible feat of preventing all objectionable content, especially for smaller providers. Instead, what is needed is national and international legislation that balances protecting users against harm while also safeguarding their right to freedom of expression.

One possible model forward for Canada can be found in the newly signed free trade agreement between Canada, the United States, and Mexico, known as the United States–Mexico–Canada Agreement (USMCA). Article 19.17 USMCA mirrors section 230 CDA by shielding online platforms from liability relating to content produced by third party users, but a difference in wording[1] suggests that under USMCA individuals who have been harmed by online speech may be able to obtain non-monetary equitable remedies, such as restraining orders and injunctions. It remains to be seen how courts will interpret the provision, but the text leaves room to allow platforms to continue to enjoy immunity from liability, while being required to take action against harmful content pursuant to a court order, such as taking down the objectionable material. Under this interpretation, platforms would be free to take down or leave up content based on their own terms of service, until ordered otherwise by a court. This would leave ultimate decision-making with courts and avoid incentivizing platforms to overzealously take down content out of fear of monetary penalties. USMCA thus appears to balance providing redress for harms with protecting online platforms from liability related to user-generated content, and provides a valuable starting point for legislators considering how to reform Canada’s domestic online intermediary liability laws.

Going Forward

The internet has proven itself to be a phenomenally transformative tool for human expression, community building, and knowledge dissemination. That power, however, can also be used for the creation, spread, and amplification of hateful, anti-democratic groups and ideas. Countries are now wrestling with how to balance the importance of freedom of expression with the importance of protecting vulnerable groups and democracy itself. Decisions taken today on how to regulate online intermediary liability will play a crucial role in determining whether the internet remains a place for the free and open exchange of ideas or becomes a chilled and stagnant desert. Although I remain sympathetic to the legitimate concerns that internet platforms do too little to prevent their own misuse, I fear that removing online intermediary liability protections will result in the same platforms having too much power and incentive to monitor and censor speech, something that risks being equally harmful. There are other possible ways forward. We could take the roadmap offered by article 19.17 USMCA. We could prioritize prosecuting individuals for unlawful behaviour on the internet, such as peddling defamation, threatening bodily violence, or fomenting sedition. Ultimately, we need nuanced solutions that balance empowering freedom of expression with protecting individuals against harm. Only then can the internet remain a place that fosters deliberative democracy and civic participation.

This piece was originally published by the McGill University Centre for Human Rights and Legal Pluralism Blog.


[1] CDA 230(c) provides that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” USMCA 19.17.2 instead provides that “No Party shall adopt or maintain measures that treat a supplier or user of an interactive computer service as an information content provider in determining liability [emphasis added] for harms related to information stored, processed, transmitted, distributed, or made available by the service, except to the extent the supplier or user has, in whole or in part, created or developed the information.”

Protecting the Right to Free Speech Online in Canada

Questions around freedom of expression are once again in the air, with concerns whirling around the Internet’s role in the spread of misinformation and intolerance and how to balance these legitimate concerns with the importance of maintaining spaces for the free and open exchange of ideas. Within this context, countries have begun to re-think how they regulate online speech, including the principle of online intermediary immunity, one of the main principles that has allowed the Internet to flourish as vibrantly as it has.

The principle of online intermediary immunity provides internet platforms (e.g., Facebook, Twitter, etc.) with legal protections against being held liable for content generated by third-party users. These laws have multiple policy goals, ranging from promoting free expression and information access to encouraging economic growth and technical innovation. But balancing these objectives has proven complicated, as seen in debates around election-time misinformation campaigns and so-called “cancel culture”. With the European Union reforming its major legislation on Internet regulation and the ongoing debate in the United States regarding similar reforms, it seems like a propitious time to look at how different jurisdictions implement their online intermediary liability laws.

Traditionally, the United States has provided some of the most rigorous protection for online intermediaries under the Communications Decency Act (CDA) of 1996, section 230 of which bars platforms from being treated as the “publisher or speaker” of third-party content and establishes that platforms moderating content in good faith maintain their immunity from liability. However, the principle of online intermediary immunity has recently come under increased scrutiny with certain countries implementing legislation to restrict the principle and others considering doing so. For example, Germany’s Network Enforcement Act of 2018 (NetzDG) requires platforms to respond to reports of illegal speech within 24 hours, delete suspected criminal content and send reports to the police, with fines of up to €50 million for non-compliance. France’s Fighting Hate on the Internet law of 2020 also required online platforms to almost instantly take down material deemed obviously illegal, at risk of heavy fines and without judicial safeguards, but was struck down by the French Constitutional Court because of free speech concerns.

In the United States itself, there are also vocal calls to restrict online intermediary immunity protections. Republican Senator Josh Hawley of Missouri has introduced two pieces of legislation ― the Limiting Section 230 Immunity to Good Samaritans Act and the Ending Support for Internet Censorship Act ― to undercut the liability protections provided for in section 230 CDA by respectively limiting them to providers who use “neutral content” moderation practices and removing them altogether for platforms that curate political information. The bipartisan Platform Accountability and Consumer Transparency, introduced by Democrat Senator Brian Schatz of Hawaii and Republican John Thune of South Dakota, would require platforms to disclose their content moderation practices, implement a user complaint system with an appeals process, and remove court-ordered illegal content within 24 hours. Joe Biden has even called for the outright repeal of section 230 CDA.

These questions are particularly important in Canada, which does not currently have any domestic statutory measures limiting the civil liability of online intermediaries. Although proposed with the laudable goals of combatting disinformation, harassment, and threats of violence, legislation that increases restrictions on freedom of speech, such as the reforms described above, should not be taken in Canada. The risks are too high that these types of measures will instead motivate platforms to actively engage in censorship, because of the prohibitive costs associated with the near-impossible feat of preventing all objectionable content, especially for smaller providers. Instead, what is needed is national and international legislation that balances protecting users against harm while safeguarding their right to freedom of expression.

One possible model forward for Canada can be found in the newly signed free trade agreement between Canada, the United States, and Mexico, known as the United States–Mexico–Canada Agreement (USMCA). Article 19.17 USMCA mirrors section 230 CDA by shielding online platforms from liability relating to content produced by third party users, but adds the proviso that individuals who have been harmed by defamatory or other objectionable online speech may obtain equitable court-ordered remedies. This means that platforms continue to enjoy immunity from liability, but are required to take action against harmful content pursuant to a court order, such as taking down the objectionable material. USMCA thus appears to balance providing redress for online harms with protecting online platforms from liability related to user-generated content, and provides a valuable starting point for Canadian legislators considering how to reform Canada’s domestic online intermediary liability laws.

How Canada can and should respond to twin challenges of COVID-19 and climate change

Concerns over climate change in Canada and abroad have rapidly gained momentum in recent years. Now, amidst the global health crisis presented by COVID-19, governments, action groups, and citizens around the world are reflecting on how we can address these twin crises and build a better future going forward. One such strategy is to attempt to leverage the pandemic into an opportunity by tying government stimulus packages to efforts to keep global warming within 1.5°C to 2°C and to encourage businesses to transition away from a fossil-fuel-based economy.

On May 11, 2020, Canada joined the ranks of countries doing just that. As part of Canada’s COVID-19 Economic Response Plan, Prime Minister Justin Trudeau announced the Large Employer Emergency Financing Facility (LEEFF), a program meant to provide loans to large Canadian employers who have been affected by the COVID-19 outbreak. In line with global calls for sustainable stimulus packages, the program requires companies that receive government assistance to publish financial disclosure reports with information on climate‑related risks. The report must highlight how the company’s strategies, policies, and practices will help manage climate-related risks and opportunities, and contribute to achieving Canada’s commitments under the Paris Agreement and the goal of net-zero emissions by 2050.

Measures like these are critical for ensuring that efforts to repair the economic damage caused by COVID-19 do not derail efforts to limit global warming. The post-pandemic recovery period will be a decisive moment for fending off climate change. It is vital that as Canada’s economy re-opens, we avoid making the same mistakes as in the 2008 financial crisis, when government measures to stimulate economies largely ignored environmental consequences.

And if you think this proposal sounds too much like “hippie idealism” or “leftist hogwash”, consider that McKinsey & Company ― a company not known for indulging in sentimentality or misplaced optimism ― has reported that a low-carbon recovery could not only help reduce emissions, but also create more jobs and economic growth than a high-carbon recovery. Consider also that companies and investors are already showing buy-in as they recognize the financial risks that climate change poses to their investments and the need for risk disclosure mechanisms. Just recently, Morgan Stanley announced that it will begin publicly disclosing how much its loans and investments contribute to climate change, becoming the first major U.S. bank to do so.

As the world moves into a “new normal”, much remains uncertain. What is clear though is that we have reached a pivotal moment to prevent catastrophic global warming. The government’s announcement of LEEFF is a step in the right direction. But we need more than just risk disclosure. We need our federal and provincial governments to invest in a low carbon recovery from the COVID-19 economic crisis to ensure that as we move out of one crisis, we do not precipitate another, possibly greater, one.
