An organization representing more than 150 Canadian tech companies is calling on the country to take a sensible but speedy approach to artificial intelligence.
In a report released Friday, the Council of Canadian Innovators (CCI) argues the country has an opportunity to be a leader in the global AI sector, currently valued at $299-billion and projected to reach $2-trillion by 2030.
However, CCI said Canada must ensure any AI regulation is “responsible,” blending clarity, trust and lessons other nations have learned from trying to rein in the dangerous side of the technology.
“We do want people to move quickly but intelligently to make sure that there are pieces that make sense and that people can trust (regulation), but we also really want a framework that is going to prove durable,” said Laurent Carbonneau, CCI’s director of policy and research.
The council’s call comes as interest in the technology has exploded in recent months, driven by big developments in generative AI systems, which can create text, images, code and other content in response to user prompts.
While proponents say the technology holds great promise and is expected to bring efficiency and accuracy to difficult and mundane tasks, others warn it poses existential risks and could fuel unemployment, misinformation, bias and discrimination.
Canada hopes to balance those potential rewards and risks through the Artificial Intelligence and Data Act, tabled last summer, which aims to ensure AI does not cause serious harm to individuals.
Should the act, known as Bill C-27, pass, the council would like the regulation development and implementation stages to be expedited.
“If we create an environment where there’s uncertainty over a long period of time, like if we’re talking like three years before there’s a total rollout, that would be bad,” said Carbonneau.
In the lead-up to any rollout, CCI hopes rules and standards will be clear and easy to understand while still giving innovators enough space to launch pilots and experiment.
Regulation must also consider the full gamut of AI’s uses and possible impacts, and potentially incorporate a tiered structure with corresponding rules and responsibilities for specific applications of the technology, CCI said.
The European Union’s AI legislation, due to come into effect in 2026, takes such a tiered approach: each AI system is classified as posing unacceptable, high, low or minimal risk, and regulation is applied according to that severity level.
Systems that perform constant facial recognition, for example, are deemed “unacceptable” and banned.
Systems that make automated decisions about job applications, admissions to educational institutions or biometric identification would be considered high risk. Anyone behind a high-risk system must ensure it includes human oversight and cybersecurity measures, and must notify the government before deploying it.
Rather than create a new, dedicated regulatory body or single legal model for AI, the U.K. government wants regulators to tailor strategies for individual sectors that consider safety, transparency, fairness, accountability and redress. It remains open to legislation should it be needed, but has not made moves toward developing new laws, CCI said.
Meanwhile, in the U.S., most AI regulation is happening at the state level, though the White House’s Office of Science and Technology Policy released a blueprint for an AI Bill of Rights in October 2022.
Because Canada does not have the market size or clout of the EU or the U.S., CCI argues the country “should take care to ensure that its eventual governance model does not stray too far from the emerging global norm.”
“An outlier policy mix in Canada would drastically harm the efforts of Canadian-headquartered companies to scale globally and to contribute to Canadian economic and productivity growth and innovation,” the council argued.