What is the Online Harms Act (Bill C-63) in Canada?

The digital age has brought immense opportunities for communication, creativity, and connection. However, it has also exposed individuals, particularly young people, to various online risks, including cyberbullying, hate speech, and sexual exploitation.

Recognizing the need to address these challenges, the Government of Canada has introduced the Online Harms Act, also known as Bill C-63, to combat harmful online content and enhance the safety of internet users, especially children.


Understanding the Online Harms Act

The Online Harms Act proposes a comprehensive legislative and regulatory framework to establish a baseline standard for online platforms, including social media, live streaming services, and adult-content platforms, to ensure the safety of their users. The key objectives of the Act are:

  1. Protecting Children: The Act aims to protect children from exposure to harmful content, including sexual exploitation, cyberbullying, and self-harm promotion. Online platforms must implement age-appropriate design features and parental controls to safeguard children’s online experiences.
  2. Combatting Hate Speech: Hate speech, whether online or offline, poses a significant threat to societal harmony and individual well-being. The Act strengthens laws against hate speech by defining “hatred” in the Criminal Code and increasing penalties for hate propaganda offences. It also introduces a new hate crime offence applicable to any criminal act motivated by hate, with a maximum sentence of life imprisonment.
  3. Enhancing Reporting Mechanisms: Recognizing the importance of timely reporting in combating online crimes, the Act strengthens the reporting mechanisms for internet child pornography offences. It centralizes mandatory reporting through a designated law enforcement body, imposes data preservation requirements on internet service providers, and extends the limitation period for commencing a prosecution to five years.


Core Components of the Online Harms Act

1. Legislative and Regulatory Framework:

The Act mandates that online platforms adopt measures to reduce the risk of harm from specific categories of harmful content, such as sexual exploitation, hate speech, and incitement to violence. Non-compliance with these obligations may result in significant penalties.

2. Digital Safety Commission:

The Act establishes a Digital Safety Commission of Canada to enforce the regulatory framework and oversee online safety initiatives. The Commission will audit compliance, handle user complaints, and set new standards for online safety.

3. Digital Safety Ombudsperson:

The Act creates a Digital Safety Ombudsperson to advocate for users’ interests and address systemic issues related to online safety. This independent authority will gather user feedback, conduct consultations, and publish reports to promote a safer online environment.

Categories of Harmful Content Addressed

The Online Harms Act targets seven categories of harmful content:

  1. Sexual Exploitation: Content that sexually victimizes a child or revictimizes a survivor.
  2. Consent Violation: Intimate content communicated without consent.
  3. Hate Speech: Content that promotes hatred based on protected grounds.
  4. Violent Extremism: Content that incites violent extremism or terrorism.
  5. Incitement to Violence: Content that encourages violence against individuals or groups.
  6. Cyberbullying: Content used to bully a child.
  7. Self-harm Inducement: Content that induces a child to harm themselves.

The introduction of the Online Harms Act represents a significant step towards creating a safer digital environment for Canadians. The Act holds online platforms accountable for harmful content and empowers regulatory bodies to enforce online safety measures.

Ultimately, the Act aims to shield individuals from online harm. As technology continues to evolve, legislative frameworks must adapt to address emerging risks and safeguard the well-being of internet users, especially vulnerable populations such as children.