It took a looong time, but the federal government finally introduced its new version of the Online Harms Act a few weeks ago. I would have written this post a lot sooner, but I’ve actually been focusing on getting some scholarly writing done these past couple of weeks! Anyway, on to it.
The new Bill (C-63) deals with a lot of different ‘online harms’, including things like inciting violence, child pornography, bullying of children, inducing children to harm themselves, and disseminating intimate content without consent (including ‘deep fakes’), but the focus of my post here will be the hate speech provisions.
As regular readers may recall, I currently hold a big Social Sciences and Humanities Research Council grant to study the regulation of online hate speech in Canada, New Zealand, and the United Kingdom. These and other countries committed to implementing new laws on online hate speech in the aftermath of the horrific 2019 terrorist attack in Christchurch, New Zealand. We recently added Australia to our cases. The EU, the UK and Australia have recently passed legislation. In New Zealand, of all places, efforts appear to have collapsed.
In Canada, the Liberals introduced an initial bill in 2021, but it died on the order paper with the election call that year. The 2021 bill (C-36) was relatively modest in comparison to the new behemoth, and it presented a problematic definition of hate speech as being “the content of a communication that expresses detestation or vilification of an individual or group of individuals on the basis of a prohibited ground of discrimination.” While the concepts of “detestation” and “vilification” are taken directly from the Supreme Court’s articulation of “hatred” for the purposes of these types of laws, the definition in C-36 made it sound as if the mere expression of “detestation” transforms speech into unlawful hate speech. What the Court in fact emphasized is that the speech in question must be likely to expose individuals to detestation or vilification in the sense of inspiring enmity and extreme ill-will against them. It is not enough for Joe Smith to simply state “I detest Black people”. The definition in Bill C-36 was far too broadly worded to match the high threshold the Court articulated for hateful speech to become unlawful hate speech.
It is something of a pleasant surprise, then, that the government’s new bill repairs this problem by adding the necessary contextual detail to its definition. It prohibits “content that foments hatred” and defines it to mean “content that expresses detestation or vilification of an individual or group of individuals on the basis of a prohibited ground of discrimination, within the meaning of the Canadian Human Rights Act, and that, given the context in which it is communicated, is likely to foment detestation or vilification of an individual or group of individuals on the basis of such a prohibited ground.” This latter clause is crucial. The bill’s text also adds: “For greater certainty and for the purposes of the definition content that foments hatred, content does not express detestation or vilification solely because it expresses disdain or dislike or it discredits, humiliates, hurts or offends.”
There is little danger, in light of the Court’s previous jurisprudence on hate speech, of this definition running afoul of the Charter’s free expression guarantee.
The problem, however, is how enforcement of this will operate. Bill C-63 makes changes to the Criminal Code and also amends the Canadian Human Rights Act to allow people to file complaints against individuals posting online hate speech (the old hate speech provisions in the CHRA were repealed back in 2013).
One of the changes to the Criminal Code is to impose a penalty of life imprisonment for those found guilty of advocating or promoting genocide. This provision has already been sharply criticized by observers, especially in light of legal disagreements about the definition of genocide in relation to both ongoing domestic and international events.
Beyond the severity of potential penalties, there are other important differences between Criminal Code provisions on hate speech and those in a human rights statute like the CHRA. For one thing, criminal laws against hate speech carry a higher burden of proof, especially with regard to intent. By contrast, under statutory human rights law all a complainant has to do is file a complaint, and the person complained against becomes subject to a process that is very much like a judicial one but without all of the relevant protections a criminally accused person receives, including the right to face their accuser. In the statutory human rights law context, the process is the punishment, and given that we know there will be complaints about expression that does not meet the high bar of unlawful hate speech, there will be people subjected to (potentially very expensive) processes, in some cases unjustifiably. This is one of the reasons the old CHRA hate speech provision was repealed: it was thought that the Criminal Code was the appropriate context for dealing with this social ill, and would generally be fairer, because for a case to proceed the prosecutor’s office would have to be fairly confident of a finding of guilt. When you allow any member of the public to submit a complaint, and a bureaucratic process takes over from there, you lose the important benefits of prosecutorial discretion that exist in the criminal context.
Indeed, the most important innovation of the new Online Harms Act, and the biggest threat to expressive freedom it poses, lies not so much in the substance of what the bill seeks to regulate or prohibit as in the massive new bureaucratic machinery it establishes to enforce everything. C-63 would establish a new “Digital Safety Commission” to enforce the law and a “Digital Safety Ombudsperson” to provide support. Between the new Commission and the renewed responsibilities of the Canadian Human Rights Tribunal over hate speech complaints, unless these bodies are set up with very efficient first-stage processes to vet complaints and liberally dismiss frivolous or unmeritorious ones, one gets the sense that they could very well quickly become overwhelmed. Michael Geist has written more on this context here and has already suggested certain aspects be removed from the bill here.
The other core element of the bill is to shift responsibility to social media platforms/apps to regulate, reduce, or eliminate the various types of harmful material, to ensure users are able to block harmful content, and to be transparent about how they regulate content. Other jurisdictions, including the European Union and the United Kingdom, have similarly passed laws that place more onus on private companies to regulate content.
With respect to hate speech, there are specific challenges to this approach. Given the sheer volume of daily posts on platforms like Facebook and ‘X’ (Twitter), the only way to meaningfully assess large swaths of content is through the use of algorithms. The risks and flaws associated with algorithmic decision-making are already well known: there is evidence that algorithms can be tricked, that they incorporate biases into their decision-making, and, of course, that they can misidentify hate speech. Indeed, Facebook has already faced problems with ‘false positives’, such as in 2021 when its automated ad filter blocked a Jewish group’s efforts to raise awareness about anti-Semitism. (For a great read on the challenges of regulating social media, see Carissima Mathen’s chapter in my Dilemmas of Free Expression book).
There have been contexts where social media companies have shown that they can effectively, and relatively quickly, remove harmful material, as they did when ‘deep fakes’ created of Taylor Swift temporarily flooded certain sites earlier this year. But that incident apparently stemmed from efforts by AI users to intentionally bypass certain filters that sites use to restrict pornographic content, demonstrating that technology shouldn’t just be regarded as the solution to the problem but is part of the problem itself.
That point underscores how, when it comes to hate speech at least, the Online Harms Act really only tackles the symptom of the problem. Targeting hate speech doesn’t address the root problem: hatred itself. Hate speech, as I have written in my academic work, does not emanate from nowhere. A focus on censoring or hiding expression will not resolve the more pressing concern that society is awash in racism, sexism, and other forms of hate. It is perfectly reasonable for governments to try to tackle the problem of online harms, and even to regulate social media companies to do so (if they can get the regulation right), but we need more fundamental and longer-term policies to address hate in society.
To the extent there are any rights issues or constitutional infirmities with the bill beyond those discussed above, they will likely be exposed in its implementation rather than on the face of the bill or in how it defines hate speech. The government has yet to release its Charter statement on C-63, and we will see to what extent the Department of Justice has thought through some of the complexities from a rights perspective. But I suspect that if anything resembling the bill at first reading actually passes, it will be less a field day for constitutional lawyers than for those in administrative law. I don’t envy whoever will be leading this new Digital Safety Commission!