What non-profits need to know about AI policy-making in Canada

Canadian non-profits need to get involved in advocacy relating to artificial intelligence policy. It’s complex and fast-moving, contributor Katie Gibson writes. Here’s what you need to know to get started.

AI policy-making is happening right now. And it’s happening everywhere. Not just on the floor of the House of Commons, but in your local police force and public-school board.

No matter your driving purpose, now is the time to think about how artificial intelligence might affect the issues and communities that your organization is focused on.

As they say, “If you’re not at the table, you’re on the menu.” Policy-makers will not always invite you to dinner. Indeed, Innovation, Science and Economic Development Canada (ISED) has been criticized for inadequately consulting with groups affected by the proposed Artificial Intelligence and Data Act. “The inadequacy of this consultation process has resulted in a gravely and fundamentally flawed bill that lacks democratic legitimacy,” according to an open letter written by Canadian civil society organizations. When the Assembly of First Nations and tech founder and investor Jim Balsillie both raise concerns about lack of consultation, there may be a problem.

Many non-profits have already begun to engage in these conversations. But with the exponential growth in AI policy-making, we need a corresponding increase in non-profit engagement. To date, non-profits focused on civil liberties, privacy, and technology have been well represented – as demonstrated by the signatories of the open letter. We need many more, and more diverse, non-profits bringing their perspectives to the policy process.   

What you need to know: The key tensions at the heart of AI policy-making

To make your voice heard in the AI policy process, you don’t need a comprehensive understanding of the technology or the proposed policy instruments. Those change, sometimes daily. What you do need is an appreciation of the key tensions at the heart of AI policy-making in Canada. Those affect the entire policy-making process – whether as text or subtext. Understanding these tensions gives you the solid footing to engage. Below are some of the tensions to be aware of, although a complete accounting would of course require a much longer discussion.

1. Conflicting objectives of AI policy

AI policy-making is rife with conflicting, often incompatible, objectives. The policy-maker’s job is to set objectives, analyze options, and choose a course of action. But much AI policy activity starts with unclear objectives. Are we seizing benefits, and if so, which ones? Are we managing risks or harms and, again, which ones?

Many jurisdictions began by making policy intending to increase productivity and innovation. These policies focus on AI research and development, commercialization, and adoption by businesses. One example is Canada’s 2017 Pan-Canadian Artificial Intelligence Strategy, which was expanded in 2022. ISED, Canada’s innovation ministry, was responsible for this strategy, reflecting its industry focus.  

This focus on winning the global AI race is framed in terms of not only economic growth but also national security. The massive technological disruption is happening at a moment of heightened geopolitical tension. These considerations weigh heavily in AI policy-making, especially at the national level.  

More recently, policy efforts have sought to regulate the technology and its applications to minimize undesirable social and economic impacts. These negative impacts run the gamut – from human rights violations to worker displacement. This is where the Artificial Intelligence and Data Act comes in; it was introduced in June 2022 to establish common requirements for AI systems and prohibit conduct that may result in serious harm.

Canada’s 2024 federal budget announcement foregrounds these tensions. The majority of the funding is focused on building the superhighway to an AI economy: $2 billion for computing infrastructure and $300 million for AI start-ups and adoption. A small portion – around $100 million – erects guardrails on that highway, addressing job loss and safety concerns.

There is a lot of policy-making focused on “AI for not bad” – sometimes branded “responsible AI” – but much less emphasis on “AI for good.” AI for good often enters the conversation in the guise of AI use cases for healthcare, education, or environmental outcomes. Non-profit advocates concerned with social outcomes, then, should identify the objectives they would like centred. If the conversation is dominated by industry, economic objectives will hold sway. Consider what is important to you and your stakeholders – and make that case to policy-makers.

2. Shifting definitions of AI

Policy-making about AI has proceeded without an agreed definition of AI. This is unusual and lends credence to suggestions that “artificial intelligence” is not, itself, a proper subject of regulation – that it is akin to regulating “software” as a whole.

That said, we are witnessing convergence around the definition of an “AI system” proposed by the Organisation for Economic Co-operation and Development. The OECD offered a definition in 2019 and updated it in 2023. The European Union’s landmark Artificial Intelligence Act (AI Act), the world’s first comprehensive AI law, draws on the updated OECD definition.

Generative AI’s dramatic entrance muddied the definitional waters. When ChatGPT was released on an unsuspecting public in November 2022, generative AI hogged the spotlight. With ChatGPT as their primary reference point, many began to conflate AI and generative AI. This, despite the prevalence of predictive AI applications, such as song or movie recommendations. A raft of new policies deal exclusively with generative AI, such as the Government of Canada’s guide on its own use of generative AI.

Non-profit advocates should remember that generative AI is just one type of AI. Predictive AI can be as useful, and as damaging, as generative AI. When AI systems used in hiring, housing, banking, or penal systems lead to biased decisions, this is predictive AI at work.

3. Safety versus fairness emphasis

Over the past several years we have witnessed a tug of war between prioritizing the “safety” of AI and the “fairness” of AI. Arguably, safety has won the war by expanding its ambit to include fairness. This is an over-simplification, but it is a useful heuristic for understanding AI policy debates.

Originally in the safety camp were technologists and scientists, for the most part. Some saw the negative effects of AI as simply an engineering problem to solve. How do we train AI models to produce the results we intend? Others focused on the potential catastrophic risk posed by AI systems that could wrest the steering wheel from less-intelligent humans and drive us all off a cliff. The Ontario government’s Beta Principles for Ethical Use of AI, released back in January 2022, illustrate this narrow view of AI safety as primarily an alignment problem: “Designers, policy makers and developers should embed appropriate safeguards throughout the life cycle of the system to ensure it is working as intended” (my emphasis).

This focus on safety was institutionalized by the previous UK government, which held a global AI Safety Summit, resulting in The Bletchley Declaration. And it was perhaps epitomized by an open letter published in March 2023 that sought a six-month pause on training powerful AI systems, signed by tech leaders including Elon Musk. The United Kingdom now has an AI Safety Institute, whose mission was described as “to minimise surprise to the UK and humanity from rapid and unexpected advances in AI.” The United States, Japan, Singapore, the EU, and Canada are standing up their own versions in quick succession.

By contrast, fairness advocates have focused on harms happening today. They often employ a human rights framework. Bias, discrimination, and inequity are key words. Both individual and collective harms are considered. For example, AI experts including Timnit Gebru criticized the open letter seeking the six-month pause: “It is indeed time to act: but the focus of our concern should not be imaginary ‘powerful digital minds.’ Instead, we should focus on the very real and very present exploitative practices of the companies claiming to build them, who are rapidly centralizing power and increasing social inequities.”

We are now witnessing definitional creep: “safety” has expanded to include a whole range of risks, including bias and discrimination. By the time of the AI Summit in Seoul in May 2024, the International Scientific Report on the Safety of Advanced AI addressed a range of risks, from disinformation to environmental impact to bias and underrepresentation.

Non-profit advocates should, then, consider adopting and co-opting the language of AI safety – and advocate for definitions that include the issues their communities care about. If, as the federal government has announced, Canada is investing $50 million in a new AI Safety Institute, that institute should focus on the risks and harms relevant to your community.

4. Government regulation versus self-regulation

The governance tool kit for AI is still being assembled, but it covers the field, from voluntary statements of principle to detailed government regulation. The choice of policy instrument matters.

Canadians have watched this pendulum swing in real time. The federal government introduced the Artificial Intelligence and Data Act as part of Bill C-27 in June 2022. This was a bold legislative play. But when ChatGPT was released that November, generative AI hijacked the conversation. ISED reacted by pulling together the Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, signed by companies operating in Canada like BlackBerry, Cohere, and IBM.

Where you stand depends on where you sit. Tech companies generally prefer self-regulation and voluntary standards. The typical arguments are that policy-makers don’t understand technology well enough to regulate it, and heavy-handed regulation slows innovation.

It also depends on how long you’ve been sitting in your seat. Many lawmakers have lived through a failed experiment in self-governance by large technology companies. We’re all arguably victims of an ungoverned internet and social media platforms gone wild. Many policy-makers regret their hands-off approach. AI regulation is a mulligan – a chance to rein in this technology and its purveyors before the damage is done.

Many policy-makers fall somewhere in the middle, arguing for a combination of “hard” and “soft” law. This is where standards-setting comes in. Both the EU AI Act and President Joe Biden’s October 2023 executive order rely on standards bodies to do some of the governance heavy lifting. For example, the US National Institute of Standards and Technology has developed an AI Risk Management Framework and a related framework focused on generative AI.

Non-profit advocates should understand and take a view on the appropriate degree and kind of government intervention.

5. Horizontal versus vertical policy-making

Another tension is “horizontal” versus “vertical” policy-making for AI. Horizontal policies apply to all actors, regardless of sector or use case. Vertical policies target specific industries or sectors.

Canada has started down the path of horizontal policy-making with the Artificial Intelligence and Data Act, and the EU AI Act takes the same approach. Other jurisdictions break the problem down by sector. For example, President Biden’s 2023 executive order directed numerous federal agencies to take action in their specific regulatory spheres.

Part of this equation is the extent to which government regulates its own use of AI, versus focusing on the private sector. The Artificial Intelligence and Data Act does not apply to government itself. By contrast, Ontario’s Bill 194 applies only to public sector use of AI.

When responding to policy-making efforts, non-profit advocates should look closely at how policies affect the sectors they care about and how they apply to the government actors with the greatest impact on their communities.

6. Canadian exceptionalism versus international harmonization

Does Canada even need a “Canadian” AI policy? Many say no and are already predicting a “Brussels effect”: that the EU AI Act will become de facto international law. There is precedent for this: the EU’s privacy law, the General Data Protection Regulation, has now become a global norm. Others point to Canada’s tight economic ties with the United States. In this scenario, more restrictive regulations in Canada will be criticized for raising compliance costs and driving businesses away, and calls for regulatory “harmonization” or “interoperability” will become louder.

Canada may, then, be well suited to driving global governance efforts forward. The Hiroshima AI Process emerging from the Group of Seven is one such effort where Canada can play a meaningful role as it takes on the G7 presidency in 2025.

Conclusion

What should a non-profit leader do? The key point: jump into the fray. Nobody else is going to represent your community’s unique perspective when policy is made. And when you are reviewing proposed policies, here are some questions you might consider:

  1. What is the objective (stated or implied)? Is it to seize benefits or mitigate risks or harms – and for whom?  
  2. What is the scope? How is AI being defined? Does it cover all sectors or only some? Does it apply to government’s own use of AI? How will your own communities be affected?
  3. How enforceable is this approach? How clearly defined are the requirements? Who is tasked with enforcement? To what extent does it rely on voluntary compliance or self-regulation?  
  4. How does it align with policy-making in other jurisdictions? Are there examples elsewhere that better suit your community’s needs?  

Policy-makers need to hear from you. Centring the needs of your communities and stakeholders is critical for building the AI future we deserve.
