In June 2022, the Government introduced Bill C-27, an Act to
enact the Consumer Privacy Protection Act, the Personal Information
and Data Protection Tribunal Act, and the Artificial Intelligence
and Data Act. A major component of this proposed legislation
is a brand new law on artificial intelligence. If passed, it will
be the first Canadian law to regulate AI systems.
The stated aim of the Artificial Intelligence and Data
Act (AIDA) is to regulate international and interprovincial
trade and commerce in artificial intelligence systems. The
Act requires the adoption of measures to mitigate
“risks of harm” and “biased
output” related to something called “high-impact
systems.”
Ok, so how will this work? First, the Act (since
it’s federal legislation) applies to “regulated
activity,” which refers to specific activities carried
out in the course of international or interprovincial trade and
commerce. That makes sense, since that is what falls within
federal jurisdiction. Think banks and airlines, for sure, but the
scope will be wider than that since any use of a system by
private sector organizations to gather and process data across
provincial boundaries will be caught. The regulated activities are:
(a) processing or making available for use any
data relating to human activities for the purpose of designing,
developing or using an artificial intelligence system;
(b) designing, developing or making available
for use an artificial intelligence system or managing its
operations.
That is a purposely broad definition, designed to catch the
companies that use these systems, the providers of such systems,
and the data processors who deploy AI systems in the course of
data processing, wherever such systems are used in international
or interprovincial trade and commerce.
The term “artificial intelligence system” is
also broadly defined and captures any “technological system
that, autonomously or partly autonomously, processes data related
to human activities through the use of a genetic algorithm, a
neural network, machine learning or another technique in order to
generate content or make decisions, recommendations or
predictions.”
For anyone carrying out a “regulated
activity” in general, there are record-keeping
obligations and regulations regarding the handling of anonymized
data that is used in the course of such activities.
For those who are responsible for so-called
“high-impact systems”, there are special
requirements. First, a provider or user of such a system is
responsible for determining whether their system qualifies as a
“high-impact system” under AIDA (something to be
defined in the regulations).
Those responsible for such “high-impact
systems” must, in accordance with the regulations,
establish measures to identify, assess and mitigate the risks of
harm or biased output that could result from the use of the system,
and they must also monitor compliance with these mitigation
measures.
There’s more: anyone who makes a “high-impact
system” available, or who manages the operation of such a
system, must also publish a plain-language description of the
system that includes an explanation of:
(a) how the system is intended to be used;
(b) the types of content that it is intended to
generate and the decisions, recommendations or predictions that it
is intended to make; and
(c) the mitigation measures.
(d) Oh, and any other information that may be
prescribed by regulation in the future.
The AIDA sets up an analysis of “harm”, which
is defined as:
- physical or psychological harm to an individual;
- damage to an individual’s property; or
- economic loss to an individual.
If there is a risk of material harm, then those using these
“high-impact systems” must notify the Minister.
From here, the Minister has order-making powers to:
- Order the production of records
- Conduct audits
- Compel any organization responsible for a high-impact system to
cease using it if there are reasonable grounds to believe the use
of the system gives rise to a “serious risk of imminent
harm.”
The Act has other enforcement tools available,
including penalties of up to the greater of 3% of the
offender’s global revenue or $10 million, with higher penalties
for more serious offences of up to $25 million.
If you’re keeping track, the Act requires an analysis of:
- plain old “harm” (Section 5),
- “serious harm to individuals or harm to their
interests” (Section 4),
- “material harm” (Section 12),
- “risks of harm” (Section 8),
- “serious risk of imminent harm” (Sections 17 and 28),
- “serious physical or psychological harm” (Section 39).
All of which is to be contrasted with the well-trodden legal
analysis around the term “real risk of significant harm”
which comes from privacy law.
I can assure you that lawyers will be arguing for years over the
nuances of these various terms. What is the difference between
“harm” and “material harm”? “Risk” versus
“serious risk”? “Serious harm” versus “material
harm” versus “imminent harm”? And what if one of these
species of “harm” overlaps with a privacy issue that also
triggers a “real risk of significant harm” under federal
privacy laws? All of this
could be clarified in future drafts of Bill C-27, which would make
it easier for lawyers to advise their clients when navigating the
complex legal obligations in AIDA.
Stay tuned. This law has some maturing to do, and much detail is
left to the regulations (which are not yet drafted).
The content of this article is intended to provide a general
guide to the subject matter. Specialist advice should be sought
about your specific circumstances.