
Missed opportunities in AI regulation: lessons from Canada’s AI and Data Act

Published online by Cambridge University Press:  15 May 2025

Ana Brandusescu*
Affiliation:
Department of Geography, Faculty of Science, McGill University, Montreal, Canada
Renée E. Sieber
Affiliation:
Department of Geography, Faculty of Science, McGill University, Montreal, Canada
*
Corresponding author: Ana Brandusescu; Email: ana.brandusescu@mail.mcgill.ca

Abstract

We interrogate efforts to legislate artificial intelligence (AI) through Canada’s Artificial Intelligence and Data Act (AIDA) and argue it represents a series of missed opportunities that so delayed the Act that it died. We note how much of this bill was explicitly tied to economic development and implicitly tied to a narrow jurisdictional form of shared prosperity. Instead, we contend that the benefits of AI are not shared but disproportionately favour specific groups, in this case, the AI industry. This trend appears typical of many countries’ AI and data regulations, which tend to privilege the few, despite promises to favour the many. We discuss the origins of AIDA, drafted by Canada’s federal Department for Innovation Science and Economic Development (ISED). We then consider four problems: (1) AIDA relied on public trust in a digital and data economy; (2) ISED tried to both regulate and promote AI and data; (3) Public consultation was insufficient for AIDA; and (4) Workers’ rights in Canada and worldwide were excluded in AIDA. Without strong checks and balances built into regulation like AIDA, innovation will fail to deliver on its claims. We recommend the Canadian government and, by extension, other governments invest in an AI act that prioritises: (1) Accountability mechanisms and tools for the public and private sectors; (2) Robust workers’ rights in terms of data handling; and (3) Meaningful public participation in all stages of legislation. These policies are essential to countering wealth concentration in the industry, which would stifle progress and widespread economic growth.

Information

Type
Commentary
Creative Commons
CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press

Policy Significance Statement

With Canada’s most recent legislation on AI and data, we had the opportunity to examine the internals of Canada’s regulation and governance as it was drafted, contested and ultimately scrapped. AIDA’s origins in a federal economic development department allowed us to critique AI policy from the perspective of shared prosperity and to show that the groundwork for governments to accelerate economic inclusion was missing. We considered not only impacts on the local population but also impacts on workers’ rights outside Canada. Our recommendations for future regulation are: implementing accountability mechanisms for the public and private sectors; acknowledging the problems in the dual agendas of regulating and promoting AI and data; enforcing rights for all workers involved in the production of AI; and enabling pathways for meaningful public participation in AI policy.

1. Introduction

Many countries view regulations as a governance mechanism not to constrain artificial intelligence (AI) development and deployment but to promote innovations in AI. By supporting innovation and economic development, government officials and others argue that this governance-by-regulation will ensure all citizens gain from the benefits of AI. This sentiment plays out as national governments draft proposals to regulate AI. For instance, the United Kingdom promoted a pro-innovation approach to AI regulation: ‘making responsible innovation easier and reducing regulatory uncertainty … [that] will encourage investment in AI and support its adoption throughout the economy, creating jobs and helping us to do them more efficiently’ (UK Office for Artificial Intelligence, 2023). The European Union’s AI Act promises to ‘ensure that AI systems in the EU are safe, transparent, ethical, unbiased and under human control’ (European Parliament, 2023). In the development of its executive order on AI, the United States (US) asserted that ‘AI—if properly managed—can contribute enormously to the prosperity, equality and security of all’ (U.S. White House, 2023). The US described responsible AI as having ‘the potential to help solve urgent challenges while making our world more prosperous, productive, innovative and secure’ (U.S. White House, 2023). In these various locales, regulations have directly tied technological advances to capital development, which presumably induces inclusive progress across the broadest range of individuals.

Proponents of AI promise exponential benefits, from the optimisation of the energy grid and medical and scientific breakthroughs to improvements in government service delivery and the increased convenience of everyday life. We are sceptical that AI possesses such heightened power to generate positive impacts, as AI amplifies existing societal biases (e.g., Eubanks, 2017; Noble, 2018; Benjamin, 2019). AI systems are often transjurisdictional in their reach, and AI companies may therefore be averse to jurisdictional regulation. The hyperbole surrounding AI is also novel because it crosses partisan divides. Like other countries, Canada has rushed to invest in AI (Brandusescu, 2021) while appearing to ignore calls to protect intellectual property (IP) or address potential environmental degradation.

Despite the proposed benefits of advances in AI or other innovations, prosperity from such advances historically has been shared widely only when ‘society’s approach to dividing the gains [was] pushed away from arrangements that primarily served a narrow elite’ (Johnson and Acemoglu, 2023: 6). The Canadian Senate argued that economic development plans have historically failed to boost economic opportunities for marginalised communities, and that transitioning to shared prosperity requires recognising this unsustainable economic imbalance and rethinking how governments, businesses and organisations engage with citizens for broader access and equity (Senate of Canada, 2021). AI Now Institute’s Kak and Myers West (2024) caution against a future where ‘the concentration of AI-related resources and expertise within the private sector makes it almost inevitable that all roads (and explicitly, profits and revenue) will lead back to industry’.

Canada’s proposed and now failed AI legislation offers a case study in the rhetorical collision between economic development and the societal concerns of Canadians. We contend that AI innovation will not deliver broad-based economic development and inclusive benefits. What follows is a brief overview of the origins of Canada’s AI and Data Act (AIDA). We argue that several opportunities were missed during the drafting of AIDA. These missing elements led to extensive critiques, delays and the ultimate downfall of AIDA when the Canadian Parliament was prorogued on January 6, 2025, which meant that all draft bills, AIDA included, died.

To remedy AIDA’s missed opportunities, we recommend the Canadian government and, by extension, other governments, invest in a future AI act that prioritises: (1) accountability mechanisms and tools for the public and private sectors; (2) robust workers’ rights in terms of data handling; and (3) meaningful public participation in all stages of legislation. These critiques and recommendations have implications for Canada and the world.

2. The origins of Canada’s AIDA

The Canadian governmental department Innovation, Science and Economic Development Canada (ISED) was tasked with drafting a new digital and data strategy that evolved into the Digital Charter (Shade, 2019). (ISED was renamed from Industry Canada in 2015, with the goal of enhancing the country’s productivity and competitiveness within a global economy, thereby contributing to the economic and social welfare of Canadians.) From its inception in 2016, the Digital Charter was driven by ISED’s goals of economic development vis-à-vis innovation. In addition to promotion, the department formulates regulations and compliance mechanisms (Innovation Science and Economic Development Canada, 2019a). The Digital Charter is considered an aspirational document, which borrows from and must comply with the Charter of Rights and Freedoms (Department of Justice Canada, 1982). Overall, the Digital Charter presumes these ‘series of commitments [will] build… trust in the digital and data economy’ (Scassa, 2023). Consequently, the Digital Charter included a broad range of entities, like the Canadian Statistics Advisory Council, as well as initiatives like Protecting Democracy, the Christchurch Call to Action and Computers for Schools (Figure 1).

Figure 1. ISED’s ministerial engagements with Canadians for the Digital Charter.

The Digital Charter also has roots in calls to modernise Canada’s consumer and personal privacy laws in response to big data and technologies like AI. An example is the 2017 statement on ‘the need for a modern privacy and data protection regime, the value of data trusts and the need for compatibility with the EU’s General Data Protection Regulation [GDPR]’ (Innovation Science and Economic Development Canada, 2019b). Like many countries, ISED looked to the EU GDPR as the gold-standard regulation for data protection and privacy. A related initiative was the call in the Digital Charter to revise the protection of IP. In 2018, ISED launched a new strategy to ‘help Canadian entrepreneurs better understand and protect their IP, and get better access to shared IP’ (Innovation Science and Economic Development Canada, 2019b). Thus data protection, (consumer) privacy and, more broadly, the realm of the digital became fundamental to economic development in Canada and to ISED, where ‘the data-driven economy represents limitless opportunities to improve the lives of Canadians—from producing faster diagnoses to making farming more efficient’ (Innovation Science and Economic Development Canada, 2019c).

Between June and September 2018, the Digital Charter moved from an initial draft to public consultations, which were structured on ISED’s vision to ‘make Canada a nation of innovators’ (Innovation Science and Economic Development Canada, 2019b). ISED’s Minister, and the six Digital Innovation Leaders he selected, conducted 30 public roundtable discussions with ‘business leaders, innovators and entrepreneurs, academia, women, youth, Indigenous peoples, provincial and territorial governments and all Canadians’ (Innovation Science and Economic Development Canada, 2019b). A total of 580 participants attended the four-month consultation process, which ISED distilled into three categories: skills and talent; unleashing innovation; and privacy and trust. Of note, the ‘rapid acceleration of data being created, and its use as a commodity means Canada must re-evaluate the [marketplace] frameworks’ (Innovation Science and Economic Development Canada, 2019b). The overall sentiment was that Canada could fall behind other nations in digital innovation. It was imperative that Canada become a risk-taker to close the productivity gap with emerging technologies and ‘leverage them as a competitive advantage’ (ibid.). Regulations were paramount and urgent, but those regulations should be championed through market-led competition, data-driven innovation and the digital economy.

ISED’s consultations on the Digital Charter occurred 4 years before the inclusion of AI in any federal bill. Bill C-27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts, was introduced in June 2022. The name was shortened to the Digital Charter Implementation Act, 2022 (Department of Justice Canada, 2022). The bill’s inclusion of AI and data, formalised in AIDA, came as a surprise to Canada’s AI researcher and practitioner community. Bill C-27 has since come under significant critique, with many arguing that AIDA should be severed or removed entirely from the bill (Clement, 2023a). To address certain critiques, the Minister of ISED unexpectedly introduced a series of oral amendments to Bill C-27 during the initial meeting before Members of Parliament. In response to numerous critiques, AIDA went through multiple committee meetings. The Bill fell when Parliament was prorogued. In the next section, we discuss our pressing issues with AIDA, which will persist in future AI regulation, including the future of the Digital Charter.

3. The problems with AIDA

3.1 AIDA relied on public trust in a digital and data-driven economy

The preamble of Bill C-27 states that ‘trust in the digital and data-driven economy is key to ensuring its growth and fostering a more inclusive and prosperous Canada’ (Parliament of Canada, 2022). However, there is no guarantee that this trust will result in shared prosperity. Research has found that regions experiencing data poverty, data deserts and data divides fail to benefit from technological innovations that foster data growth and access (e.g., Leidig and Teeuw, 2015). Data growth can just as easily mean mass data surveillance, which adds opportunities to monetise data. Data-growth-as-surveillance ‘socially sorts people’ (Surveillance Study Centre, 2022) and is valuable for advertisers and data brokers (Zuboff, 2019; Lamdan, 2022) but hardly generates income for the individuals who are classified by industry and government. This asymmetry in the data economy, between a public that is both contributor and user on one side and industry and government on the other, breeds distrust. Considerable research has identified public distrust of AI, tracking government and company collection of intimate ‘online personal data traces and biometric and body data’ for surveillance purposes (Kalluri et al., 2023) and the racial bias in facial recognition technology used in predictive policing (Lum and Isaac, 2016; Buolamwini and Gebru, 2018).

Polls have shown that Canadians are highly distrustful of AI, more so than residents of other countries (Edelman Trust Barometer, 2024). In Canada, 54 percent reject AI innovation, compared with the 17 percent who embrace it. Segments of the population have legitimate reasons to distrust the impacts of AI. Marginalised and low-income people are likely to be subject to the negative impacts of AI: job losses due to automation, enhanced surveillance, amplified discrimination and increased digital divides. These concerns do not prevent governments from leveraging public trust as a justification for pursuing AI innovation (Sieber et al., 2024a: 7).

Trust and safety have become central to AI, marked by the emergence of government-funded AI safety institutes worldwide. Confident in Bill C-27’s passage, ISED announced the Canadian AI Safety Institute in November 2024 to further AI’s responsible development (Innovation Science and Economic Development Canada, 2024). Safety emphasises technical risks like system performance over societal harms such as inequality, privacy violations or environmental impact. Similarly, trust is often reduced to public acceptance that safety will be inclusive (Sieber et al., 2024a). This framing risks legitimising technical robustness without adequately addressing broader societal safeguards.

3.2 ISED tried to both regulate and promote AI and data

It has long been recognised that government departments and agencies mandated both to promote and to regulate face irreconcilable conflicts. We see this in ISED, which is mandated to regulate innovations, for example through allocating government funding, granting licenses and setting standards. ISED is also mandated to ‘make Canadian industry more productive and competitive in the global economy’ (Innovation Science and Economic Development Canada, 2019a), which places it squarely in the role of promotion. These dual roles and responsibilities are often found to be incompatible and inevitably favour promotion over accountability in AI development and deployment.

We argue that promotion is facilitated by a sense of urgency in technology adoption and deployment; hence, regulation may be implemented in haste. AI is often framed as an arms race for development and talent (Taddeo and Floridi, 2021). This urgency is especially acute for mid-tier economies like Canada’s, struggling to find a place in the geopolitics of a fourth industrial revolution (Walker and Alonso, 2016). ISED invoked an ‘agile’ and hasty approach to implementing the Digital Charter (Scassa, 2023). In responding to ISED’s approach, Wylie (2023) argues that Canadian society must be safeguarded against a ‘move fast and break how we make laws in democracy’ regulatory attitude that fast-tracks a type of economic growth that invariably concentrates wealth.

Nuclear regulatory agencies serve as the key case study of the failure of departments and agencies with dual roles of promotion and regulation (Walker and Wellock, 2010). Cha (2024) draws parallels between nuclear power regulation and safety regulations for AI, noting how the International Atomic Energy Agency recommends independence in assessing safety, setting standards and ensuring compliance. The analogy to nuclear power is further important because of its transjurisdictional impact. Nuclear power regulation contains safeguards for hardware, software and data components that policymakers could adopt (U.S. Nuclear Regulatory Commission, 2024: 17), although AI firms and labs have ‘pushed back’ against policymakers to delay or weaken such AI regulation (Khlaaf, 2023). We worry that these dual roles advance the current regulatory capture around AI (Abdalla and Abdalla, 2021; Whittaker, 2021; Wei et al., 2024). In particular, Wei et al. (2024: 1539) found that such capture led ‘to a lack of AI regulation, weak regulation or regulation that over-emphasise[d] certain policy goals over others’.

The problem is further underlined in the AI regulatory responsibilities drafted by ISED, which were assigned to a new AI and Data Commissioner. The envisioned Commissioner would have been appointed by the Minister of ISED to support the Minister in overseeing the functions of AIDA. Furthermore, the Commissioner would have been granted, at the discretion of the Minister, the authority, responsibilities and functions required to enforce AIDA. In essence, the Commissioner’s capacity for independent oversight was contingent upon the Minister’s discretion, resulting in diminished autonomous enforcement powers under AIDA (Tessono et al., 2022; Witzel, 2022). OpenMedia (2023) assembled experts who noted the deficiencies in oversight and called for greater independence, which would obviate the dual roles. This arrangement consolidated a significant level of authority within a single department (Ifill, 2022), for example, to ‘intervene if necessary to ensure that AI systems are safe and non-discriminatory’ (Parliament of Canada, 2022), and thus would be subject to ISED’s conflicting goals of promotion and regulation.

3.3 Public consultation was insufficient for AIDA

Researchers and practitioners have extensively documented the insufficiency of public consultation on AIDA. Before the tabling of AIDA in June 2022, no public consultations were conducted (Tessono et al., 2022; Attard-Frost, 2023). Since then, ISED has hosted over 300 invite-only meetings on Bill C-27, of which only nine were held with members of civil society (Clement, 2023b). This lack of transparency has denied the public the ability to scrutinise the ‘who’ and ‘how’ of AIDA’s construction. Public consultation is important to increase trust and lower scepticism in AI. Equally important, democratic governance demands that the government heed concerns about the impacts of AI on the economy and society.

With its increasing ubiquity, generative AI (GenAI) has the potential to accelerate the concentration of power in Big Tech and the AI industry, manifesting in a seamless dominance over numerous sectors (Guo, 2024). The vast majority of GenAI models are not open; they are owned by private sector firms (Widder et al., 2024), where a company can quickly constrict access or disable the millions of apps built atop its models. Because of energy and computing demands, producers of AI, especially of GenAI models, have become de facto monopolies (Whittaker, 2021). These producers will not be in Canada; the benefits will accrue neither to Canadian firms nor to Canadian employees but will be concentrated within a few American firms (Standing Committee on Industry and Technology Canada, 2024). This is another reason why Canadian public consultation on AI is paramount.

The Canadian Senate report (2021) noted a historic lack of shared prosperity enjoyed by rural and remote communities. Engaging a broad range of the public means confronting the unequal harms caused by AI, which are skewed by race, gender and geography (Benjamin, 2019). Indeed, experts are increasingly unable to anticipate potential harms due to the sheer volume of accumulated intimate data about individuals; consumers could be sold products before even expressing a desire to purchase them (Chaudhary and Penn, 2024).

3.4 Workers’ rights in Canada and worldwide were excluded in AIDA

According to Canada’s Labour Market Information Council, ‘Canada risks ceding an important piece of its sovereignty if it does not control the technology used to gather and analyse essential workplace data’ (Bergamini, 2023). Canadian workers’ rights were not addressed in the original draft of AIDA or its amendments (Attard-Frost, 2023). Researchers have extensively documented the human cost of AI systems on workers, whether in Canada or worldwide (Arsht and Etcovitch, 2018; Gray and Suri, 2019; Roberts, 2019). One shortcoming of AI legislation in addressing societal harms is the failure to account for extra-jurisdictional impacts, despite AI’s significant externalities. In low- and middle-income countries (LMICs), content moderation work for Meta (Facebook), OpenAI and TikTok has been found to traumatise workers (The Bureau of Investigative Journalism, 2022; Perrigo, 2023a), in one case leading to a content moderator committing suicide after being refused a transfer (Parks, 2019). Warehouse workers for Amazon have been intensely surveilled through the company’s ‘time off task’ feature, which let managers monitor, minute by minute, how long employees spent in the restroom (Gurley, 2022). Uber has partaken in algorithmic wage discrimination, in which wages are ‘calculated with ever-changing formulas using granular data on location, individual behaviour, demand, supply and other factors’ (Dubal, 2023: 1935). Instead of sharing in the prosperity promised by AI systems, AI has contributed to dehumanising workers (Williams et al., 2022).

Canada is hardly unique in failing to address workers’ rights; the US has also failed to centre labour in AI regulations, despite fears of job displacement due to AI deployment (Kak and Myers West, 2024). More importantly, AIDA’s response to an AI system that violates workers’ rights delegates worker protection to the developer and provides minimal oversight, namely ‘to provide guidance or make recommendations regarding corrective action’ (Office of the Minister of Innovation Science and Industry Canada, 2023). ISED’s words created the illusion of responsiveness while avoiding accountability.

The number of gig workers has rapidly increased and, with or without AI, it remains challenging to enact sufficient labour protections for those workers (De Stefano, 2016; Mateescu, 2024). The wording in AIDA covering employer-employee relations failed to include gig workers, whether freelance copywriters, artists and software developers in Canada or, for example, customer service agents in the Philippines (Deck, 2023). Worker protection is paramount since firms, which are predominantly price sensitive, already use gig workers for less expensive labour, whether off-shoring or, in Canada’s case, near-shoring. (Canada is frequently a site of near-shoring for the US; Canadian creative workers are often go-tos for Hollywood because they offer excellent and less expensive alternatives, e.g., in visual effects.) Consequently, gig workers are situated in the most precarious labour positions.

Not only may workers lose jobs but their creative works—their IP—are also being harvested to train GenAI systems without apparent legislative repercussions. IP strategy has been a weak point of government regulation in Canada (Brandusescu et al., 2021), with no mention of copyright in the original draft of AIDA (Attard-Frost, 2023) or its amendments. Canada’s long-standing use of regulations to protect ‘CanCon’ (Canadian creative content) (Mejaski, 2011) contrasts with the challenges posed by GenAI’s parasitic use of IP. The disregard for IP evinced in AIDA undermines Canada’s historic efforts, such as Bill C-18 (the Online News Act), which led to Google and Meta blocking Canadian news from their platforms. The revised national IP strategy, led by ISED since 2018, focuses on education and guidelines, not IP protection (Innovation Science and Economic Development Canada, 2023), even though protecting IP is part of the department’s mandate. Relaxing or ignoring the protection of individuals’ IP rights protects the firms developing GenAI.

4. Recommendations for a future AI act

4.1 Accountability mechanisms and tools for the public and private sectors

We argue that AIDA necessitated a redraft led by departments and agencies outside of ISED, several of which already have established accountability measures (Treasury Board of Canada Secretariat, 2019). Likewise, ombuds-type monitoring of compliance with AI legislation requires a government body separate from the one charged with regulation, free of undue influence by the public and private sectors (OpenMedia, 2023). Mechanisms should be developed to address conflicts of interest amongst those involved in the process. As we have learned from nuclear regulatory bodies, the supervisory authority should not be involved in both the commercial and regulatory aspects of AI systems (Witzel, 2022).

Accountability tools can take several forms. We recommend third-party audits of AI systems (Costanza-Chock et al., 2022), which offer an independent, arms-length assessment of an AI system. The auditor is given access to the internals of the AI system to stress-test it with alternate data. The goal is an impartial evaluation of compliance, performance, quality or adherence to specific standards, regulations or requirements. These audits can reveal harms and guide meaningful accountability (and regulations) for the government and the protection of human rights (Tessono et al., 2022; Tessono and Solomun, 2022). To finance independent AI audits, we propose that government establish an AI trust into which companies transfer a portion of their profits. Financial bonds would guarantee that a company can compensate for harms, similar to environmental bonds, which ensure, for example, the remediation of tailings from mining (Aghakazemjourabbaf and Insley, 2021).

Tracking the role of the private sector vis-a-vis AI legislation supports public sector values of transparency and accountability. Johnson and Acemoglu (2023) argue for transparency of agreements brokered among lobbyists, politicians and companies, since exorbitant sums are invested in lobbying in high-income countries. The government should reveal instances of corporate lobbying so the public can identify potential conflicts of interest between the government and companies regarding AI. Corporate lobbying practices were exempt from Canada’s AIDA (Ifill, 2022). However, Canada has a lobbying registry system that could include a range of new activities, such as ‘polling, monitoring and attending committees, offering strategic advice and hosting events’, as well as players like consultants, who play an outsized role in AI services and are active lobbyists on AI (Beretta, 2020: 138).

Reducing Big Tech’s monopolistic power is also key to the redistribution of AI’s economic benefits. Accountability approaches that champion increased government subsidies and financial redress mechanisms like tax reform in AI regulations can also prove economically beneficial to a broader segment of the population. Government can impose taxes on AI and tech companies beyond Big Tech. Because of AI’s ability to damage IP, governments can demand technical fixes like removing IP-protected data from training models. Another innovative regulatory approach considers using the economic power of the GenAI foundation model makers to transform certain AI companies and their models into public utilities (Vipra and Korinek, 2023). Here, costs would be heavily controlled, with equal access and transparency guaranteed.

4.2 Robust workers’ rights in terms of data handling

One unique aspect of AI is that most AI systems rely on massive amounts of data to train AI models before the systems can be used. If developers do not wish the models to learn from toxic images, audio and text, content needs to be cleaned—‘moderated’—of toxicity. Shared prosperity and a sustainable AI economy demand that workers’ rights be integrated. Doing so relies on bolstering the voice of workers in the development and use of AI systems. Canada can set an example worldwide for improving data worker conditions, especially in LMICs.

In Kenya, data workers who laboured for Meta, the parent company of Facebook, through the outsourcing firm Sama sued Meta (Perrigo, 2022). This lawsuit is one of the first against Big Tech outside the West (Sambuli, 2022). Kenyan data workers are also unionising for substantial worker protections like better pay and working conditions, including mental health support (Perrigo, 2023a), and launched a second lawsuit against Big Tech companies, this time Meta, OpenAI and TikTok (Perrigo, 2023b). A Western government should recognise the negative externalities amassed in LMICs and prevent AI and data analytics companies from operating, in this case in Canada, if they partake in human rights abuses worldwide (Brandusescu, 2021). Additionally, an AI act should bolster the ability of workers to engage in class actions. The strike demands of American workers in the Writers Guild of America (WGA) and the actors’ union focused heavily on the negative impacts of AI. In the end, the WGA prevented production companies from unilaterally deciding when AI could and could not be used (Foroohar, 2023).

Nearly everyone contributes data that make AI systems work. The public could form a new type of union, a data union (Johnson and Acemoglu, 2023), within which data stewards would simultaneously protect the rights of workers and citizens. Workers’ councils on AI offer another promising avenue toward worker control over AI use (McQuillan, 2022). An AI act can encourage data unions and workers’ councils as options and reduce barriers to their formation.

4.3 Meaningful public participation in all stages of legislation

To ensure shared prosperity now, and not just in an indeterminate future, an AI act must include a right to public participation in choices about developing and using AI systems (McCrory, 2024). A dominant narrative of shared prosperity prioritises the customer experience through productivity gains (Ben-Ishai et al., 2024), but this is no substitute for the loss of a job, a wrongful arrest, or another harm caused by AI. Government should not reduce the public to consumers responding to a market economy; it should instead remember its responsibility to protect its citizenry and ensure distributive prosperity. It can do so by building policy protections into an AI act that go well beyond individual privacy or consumer protections.

Numerous types of meaningful participation in AI have been ‘field-tested’, from which government can choose, among them citizens’ juries, permanent mini-publics and citizens’ assemblies (Balaram et al., 2018; Ada Lovelace Institute, 2021; Data Justice Lab, 2021; Sieber et al., 2024a, 2024b). At the same time, meaningful participation in AI must grapple with durable issues in participation, such as who is able to participate and how. Digital data tends to favour dominant and privileged populations, not refugees, migrants or stateless people. For us, the public comprises impacted individuals and groups as well as the general public. We therefore extend participation beyond those directly impacted by an AI system, and beyond tech executives, to numerous publics (Raji, 2023). A right to participation should take effect as early as possible in the development of AI systems, for example through participatory design (Mathewson, 2023).

In Canada, AI governance and data policy spaces have reduced meaningful participation to consultations such as multistakeholder forums. Their agendas are often shaped by money: meaningful participation is resource-intensive and can exclude those with limited funds and time (Sambuli, 2021). Government must equalise the significant resource differences among participating stakeholders (Sieber et al., 2024a, 2024b), for instance by compensating members of citizens’ assemblies. Despite the delays that broad participation implies, its breadth provides the richest, most nuanced solutions, articulating current harms and anticipating new ones. It is also cost-effective (European AI and Society Fund, 2023).

Because of AI’s ubiquity in communication technologies, we should ensure that older forms of participation are protected as well as improved, allowing a range of voices to express assent and dissent. Nurturing a culture of meaningful participation means having the choice to reject the use of GenAI to summarise participatory activities, since doing so further trains generative models on participants’ content. Meaningful participation requires that participants can exert significant influence on policies and products, including public influence on decisions to ban, sunset or decommission AI systems such as autonomous weapons or facial recognition.

5. Concluding remarks

AIDA was delayed by consistent criticism from civil society, businesses and academia, even as its passage was strongly supported by the Liberal Party. From its launch onward, multiple stakeholders called for AIDA to be withdrawn entirely from Bill C-27 (Canadian Civil Liberties Association, 2024). During the revision of this article, Bill C-27, along with 25 other bills, effectively died with the prorogation of Parliament, making the future of AI regulation uncertain (Osman and Reevely, 2025), even though the other political parties and industry support the individual privacy and consumer protection parts of the Bill (Mazereeuw, 2025).

We note calls for the self-regulation of AI; as others have argued, the private sector should not be relied upon to rein in AI (Ferretti, 2022; Wheeler, 2023; McCrory, 2024). Governments have a role in regulating AI to ensure that accountability is not subordinate to market incentives, which could include worker exploitation. AI legislation also requires a bottom-up approach, in which data can be leveraged through unions. These and other inclusive policy instruments should be deliberated within a transparent and democratic process.

Participating publics need not be technical experts; they bring their own lived experience and expertise to the AI regulatory discourse. Nor does the public need to be persuaded of the promised benefits of AI, or compelled to trust AI, as a prerequisite to participation. The government has a role in regulating AI to address its potentially harmful effects on society and to achieve shared prosperity for all.

Data availability statement

All resources used are included in the references. Where they are available online, a link is provided.

Acknowledgments

The authors are grateful for reviewers’ comments in helping refine the commentary.

Author contribution

Both authors have contributed to the conceptualisation, data curation, formal analysis, methodology, project administration, visualisation, writing—original draft, writing—review and editing and approved the final submitted draft.

Funding statement

This research was supported by a Canada Graduate Scholar Doctoral program grant from the ‘Social Sciences and Humanities Research Council of Canada’ (‘767-2021-0252’). The funder had no role in study design, data collection and analysis, the decision to publish or preparation of the manuscript.

Competing interests

The authors declare none.

References

Abdalla, M and Abdalla, M (2021) The grey hoodie project: Big tobacco, big tech, and the threat on academic integrity. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society. ACM, pp. 287–297.
Ada Lovelace Institute (2021) Participatory data stewardship. Ada Lovelace Institute, September 2021. Available at https://www.adalovelaceinstitute.org/report/participatory-data-stewardship.
Aghakazemjourabbaf, S and Insley, M (2021) Leaving your tailings behind: Environmental bonds, bankruptcy and waste cleanup. Resource and Energy Economics 65, 101246.
Arsht, A and Etcovitch, D (2018) The human cost of online content moderation. Harvard Journal of Law and Technology, March 2, 2018. Available at https://jolt.law.harvard.edu/digest/the-human-cost-of-online-content-moderation.
Attard-Frost, B (2023) Generative AI systems: Impacts on artists & creators and related gaps in the Artificial Intelligence and Data Act. Submission to the Standing Committee on Industry and Technology on Bill C-27, June 5, 2023. Available at https://www.ourcommons.ca/Content/Committee/441/INDU/Brief/BR12541028/br-external/AttardFrostBlair-e.pdf.
Balaram, B, Greenham, T and Leonard, J (2018) Artificial intelligence: Real public engagement. Royal Society of Arts, May 2018. Available at https://www.thersa.org/reports/artificial-intelligence-real-public-engagement.
Ben-Ishai, G, Dean, J, Manyika, J, Porat, R, Varian, H and Walker, K (2024) AI and the opportunity for shared prosperity: Lessons from the history of technology and the economy. Available at https://arxiv.org/abs/2401.09718.
Benjamin, R (2019) Race after Technology: Abolitionist Tools for the New Jim Code. New York: Polity.
Beretta, M (2020) Influencing the internet: Lobbyists and interest groups’ impact on digital rights in Canada. In Dubois, E and Martin-Bariteau, F (eds), Citizenship in a Connected Canada: A Research and Policy Agenda. Ottawa: University of Ottawa Press.
Bergamini, M (2023) Bergamini: Canada must control AI technology that gathers and analyzes workplace data. Ottawa Citizen, August 2, 2023. Available at https://ottawacitizen.com/opinion/columnists/bergamini-canada-must-control-ai-technology-that-gathers-and-analyzes-workplace-data.
Brandusescu, A (2021) Artificial intelligence policy and funding in Canada: Public investments, private interests. Centre for Interdisciplinary Research on Montreal, McGill University, March 2021. Available at https://www.mcgill.ca/centre-montreal/files/centre-montreal/aipolicyandfunding_report_updated_mar5.pdf.
Brandusescu, A, Cutean, A, Dawson, P, Davidson, R, Matthews, M and O’Neill, K (2021) Maximizing strengths and spearheading opportunity: An industrial strategy for Canadian AI. Information & Communications Technology Council, September 2021. Available at https://www.ictc-ctic.ca/wp-content/uploads/2021/09/Maximizing-Strength-and-Spearheading-Opportunity.pdf.
Buolamwini, J and Gebru, T (2018) Gender shades: Intersectional accuracy disparities in commercial gender classification. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency, PMLR 81, 77–91. https://proceedings.mlr.press/v81/buolamwini18a.html.
Canadian Civil Liberties Association (2024) CCLA joins call from civil society to withdraw AIDA from Bill C-27. CCLA, April 25, 2024. Available at https://ccla.org/privacy/ccla-joins-call-from-civil-society-to-withdraw-aida-from-bill-c-27.
Cha, S (2024) Towards an international regulatory framework for AI safety: Lessons from the IAEA’s nuclear safety regulations. Humanities and Social Sciences Communications 11, 506. https://doi.org/10.1057/s41599-024-03017-1.
Chaudhary, Y and Penn, J (2024) Beware the intention economy: Collection and commodification of intent via large language models. Special Issue 5: Future Shock: Grappling with the generative AI revolution. Harvard Data Science Review. Available at https://hdsr.mitpress.mit.edu/pub/ujvharkk/release/1.
Clement, A (2023a) No AIDA is better than this AIDA. Canada should craft an ‘agile’ AI regulatory regime, but not short-change democratic deliberation to pass an ill-conceived bill. Submission to the Standing Committee on Industry and Technology, November 8, 2023. Available at https://www.ourcommons.ca/Content/Committee/441/INDU/Brief/BR12743452/br-external/ClementAndrew-e.pdf.
Clement, A (2023b) Preliminary analysis of ISED’s C-27 list of 300 stakeholder consultation meetings. SSRN. Available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4658004.
Costanza-Chock, S, Raji, ID and Buolamwini, J (2022) Who audits the auditors? Recommendations from a field scan of the algorithmic auditing ecosystem. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency. ACM, pp. 1571–1583.
Data Justice Lab (2021) Advancing civic participation in algorithmic decision-making: A guidebook for the public sector. Data Justice Lab, June 2021. Available at https://datajusticelab.org/wp-content/uploads/2021/06/PublicSectorToolkit_english.pdf.
De Stefano, V (2016) The rise of the “just-in-time workforce”: On-demand work, crowdwork, and labor protection in the “gig economy”. Comparative Labor Law & Policy Journal 37(3), 471–504.
Deck, A (2023) The workers at the frontlines of the AI revolution. Rest of World, July 11, 2023. Available at https://restofworld.org/2023/ai-revolution-outsourced-workers.
Department of Justice Canada (1982) The Canadian Charter of Rights and Freedoms. Available at https://www.justice.gc.ca/eng/csj-sjc/rfc-dlc/ccrf-ccdl/.
Department of Justice Canada (2022) Bill C-27: An Act to Enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to Make Consequential and Related Amendments to Other Acts. Available at https://www.justice.gc.ca/eng/csj-sjc/pl/charter-charte/c27_1.html.
Dubal, V (2023) On algorithmic wage discrimination. Columbia Law Review 123, 1929–1992. https://columbialawreview.org/wp-content/uploads/2023/11/Dubal-On_Algorithmic_Wage_discrimination.pdf.
Edelman Trust Barometer (2024) 2024 Edelman trust barometer: Canada report. Available at https://www.edelman.ca/sites/g/files/aatuss376/files/2024-03/2024%20Edelman%20Trust%20Barometer_Canada%20Report_EN_0.pdf.
Eubanks, V (2017) Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: St. Martin’s Press.
European Artificial Intelligence & Society Fund (2023) Making the AI Act work: How civil society can ensure Europe’s new regulation serves people & society. European Artificial Intelligence & Society Fund, October 2023. Available at https://europeanaifund.org/newspublications/report-making-the-ai-act-work-how-civil-society-can-ensure-europes-new-regulation-serves-people-society/.
European Parliament (2023) EU AI Act: First Regulation on Artificial Intelligence. Available at https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence.
Ferretti, T (2022) An institutionalist approach to AI ethics: Justifying the priority of government regulation over self-regulation. Moral Philosophy and Politics 9(2), 239–265.
Foroohar, R (2023) Workers could be the ones to regulate AI. Financial Times, October 1, 2023. Available at https://www.ft.com/content/edd17fbc-b0aa-4d96-b7ec-382394d7c4f3.
Gray, ML and Suri, S (2019) Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. New York: Eamon Dolan Books.
Guo, E (2024) Inside Clear’s ambitions to manage your identity beyond the airport. MIT Technology Review, November 20, 2024. Available at https://www.technologyreview.com/2024/11/20/1107002/clear-airport-identity-management-biometrics-facial-recognition.
Gurley, LK (2022) Internal documents show Amazon’s dystopian system for tracking workers every minute of their shifts. VICE, June 2, 2022. Available at https://www.vice.com/en/article/5dgn73/internal-documents-show-amazons-dystopian-system-for-tracking-workers-every-minute-of-their-shifts.
Ifill, E (2022) The problems with the federal data-privacy bill will disproportionately hurt marginalized Canadians. The Globe and Mail, July 4, 2022. Available at https://www.theglobeandmail.com/opinion/article-the-problems-with-the-federal-data-privacy-bill-will/.
Innovation Science and Economic Development Canada (2019a) Policy on Providing Guidance on Regulatory Requirements. Available at https://ised-isde.canada.ca/site/acts-regulations/en/policy-providing-guidance-regulatory-requirements.
Innovation Science and Economic Development Canada (2019b) Canada’s Digital Charter in Action: A Plan by Canadians, for Canadians. Available at https://ised-isde.canada.ca/site/innovation-better-canada/en/canadas-digital-charter/canadas-digital-and-data-strategy.
Innovation Science and Economic Development Canada (2019c) Minister Bains announces Canada’s Digital Charter. Available at https://www.canada.ca/en/innovation-science-economic-development/news/2019/05/minister-bains-announces-canadas-digital-charter.html.
Innovation Science and Economic Development Canada (2023) Appendix A: The National IP Strategy Logic Model. Evaluation of the National Intellectual Property Strategy. Available at https://ised-isde.canada.ca/site/audits-evaluations/en/evaluation/evaluation-national-intellectual-property-strategy#s5.1.
Innovation Science and Economic Development Canada (2024) Canadian Artificial Intelligence Safety Institute. Available at https://ised-isde.canada.ca/site/ised/en/canadian-artificial-intelligence-safety-institute.
Johnson, S and Acemoglu, D (2023) Power and Progress: Our Thousand-Year Struggle over Technology and Prosperity. London: Hachette.
Kak, A and Myers West, S (2024) A modern industrial strategy for AI?: Interrogating the US approach. In AI Nationalism(s). AI Now Institute. Available at https://ainowinstitute.org/publication/a-modern-industrial-strategy-for-aiinterrogating-the-us-approach.
Kalluri, PR, Agnew, W, Cheng, M, Owens, K, Soldaini, L and Birhane, A (2023) The surveillance AI pipeline. Preprint. Available at https://arxiv.org/abs/2309.15084.
Khlaaf, H (2023) How AI can be regulated like nuclear energy. TIME. Available at https://time.com/6327635/ai-needs-to-be-regulated-like-nuclear-weapons/.
Lamdan, S (2022) Data Cartels: The Companies that Control and Monopolize our Information. Redwood City: Stanford University Press.
Leidig, M and Teeuw, RM (2015) Quantifying and mapping global data poverty. PLoS One 10(11), e0142076. https://doi.org/10.1371/journal.pone.0142076.
Lum, K and Isaac, W (2016) To predict and serve? Significance 13(5), 14–19. https://doi.org/10.1111/j.1740-9713.2016.00960.x.
Mateescu, A (2024) New technologies require thinking more expansively about protecting workers. Tech Policy Press, March 6, 2024. Available at https://www.techpolicy.press/new-technologies-require-thinking-more-expansively-about-protecting-workers/.
Mathewson, TG (2023) Dropout risk system under scrutiny after The Markup report. The Markup, September 23, 2023. Available at https://themarkup.org/hello-world/2023/09/23/dropout-risk-system-under-scrutiny-after-the-markup-report.
Mazereeuw, P (2025) Legislation off life support. The Hill Times, January 7, 2025. Available at https://www.hilltimes.com/story/2025/01/07/legislation-off-life-support/446772/.
McCrory, L (2024) A feminist framework for urban AI governance: Addressing challenges for public–private partnerships. Data & Policy 6, e79.
McQuillan, D (2022) Resisting AI: An Anti-Fascist Approach to Artificial Intelligence. Bristol: Bristol Policy Press.
Mejaski, C (2011) Promoting CanCon in the age of new media. Master’s Thesis. Ryerson University - York University, Toronto, Canada.
Noble, SU (2018) Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press.
Office of the Minister of Innovation Science and Industry Canada (2023) Standing Committee on Industry and Technology. Available at https://www.ourcommons.ca/content/Committee/441/INDU/WebDoc/WD12600809/12600809/MinisterOfInnovationScienceAndIndustry-2023-10-03-e.pdf.
OpenMedia (2023) Advocates demand proper consideration for AI regulation. OpenMedia, September 25, 2023. Available at https://openmedia.org/press/item/advocates-demand-proper-consideration-for-ai-regulation.
Osman, L and Reevely, D (2025) Justin Trudeau says he’ll resign but not before dealing with new Trump administration. The Logic, January 6, 2025. Available at https://thelogic.co/news/justin-trudeau-resignation-trump-administration/.
Parks, L (2019) Dirty data: Content moderation, regulatory outsourcing, and the cleaners. Film Quarterly 73(1), 11–18.
Parliament of Canada (2022) Bill C-27: An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts. Available at https://www.parl.ca/DocumentViewer/en/44-1/bill/C-27/first-reading.
Perrigo, B (2022) Inside Facebook’s African sweatshop. Time, February 14, 2022. Available at https://time.com/6147458/facebook-africa-content-moderation-employee-treatment.
Perrigo, B (2023a) OpenAI used Kenyan workers on less than $2 per hour to make ChatGPT less toxic. Time, January 18, 2023. Available at https://time.com/6247678/openai-chatgpt-kenya-workers.
Perrigo, B (2023b) Former TikTok moderator threatens lawsuit in Kenya over alleged trauma and unfair dismissal. TIME, July 10, 2023. Available at https://time.com/6293271/tiktok-bytedance-kenya-moderator-lawsuit/.
Raji, ID (2023) AI’s present matters more than its imagined future. The Atlantic, October 4, 2023. Available at https://www.theatlantic.com/technology/archive/2023/10/ai-chuck-schumer-forum-legislation/675540.
Roberts, ST (2019) Behind the Screen: Content Moderation in the Shadows of Social Media. New Haven: Yale University Press.
Sambuli, N (2021) Five challenges with multistakeholder initiatives on AI. Carnegie Council for Ethics in International Affairs, September 15, 2021. Available at https://www.carnegiecouncil.org/media/article/five-challenges-with-multistakeholder-initiatives-on-ai.
Sambuli, N (2022) Facebook lawsuit in Kenya could affect Big Tech accountability across Africa. OpenDemocracy, August 12, 2022. Available at https://www.opendemocracy.net/en/5050/facebook-meta-sama-daniel-motaung-court-kenya.
Scassa, T (2023) Canada’s proposed AI & Data Act - Purpose and application. Personal Blog, August 8, 2022. Available at https://www.teresascassa.ca/index.php?option=com_k2&view=item&id=362:canadas-proposed-ai--data-act-purpose-and-application&Itemid=80.
Senate of Canada (2021) Rising to the Challenge of New Global Realities: Forging a New Path for Sustainable, Inclusive and Shared Prosperity in Canada. Senate Prosperity Action Group. Available at https://peterharder.sencanada.ca/media/drknqpe2/pag-report-english.pdf.
Shade, LR (2019) Editorial: Canadian digital and data strategy. Canadian Journal of Communication 44(2), 23–26.
Sieber, RE, Brandusescu, A, Adu-Daako, A and Sangiambut, S (2024a) Who are the publics engaging in AI? Public Understanding of Science 33, 634–653. https://doi.org/10.1177/09636625231219853.
Sieber, RE, Brandusescu, A, Sangiambut, S and Adu-Daako, A (2024b) What is civic participation in artificial intelligence? Environment and Planning B: Urban Analytics and City Science, 23998083241296200. Available at https://doi.org/10.1177/23998083241296200.
Standing Committee on Industry and Technology Canada (2024) Notice of meeting: Study of Bill C-27, the Digital Charter Implementation Act, 2022. Parliament of Canada. Available at https://www.ourcommons.ca/DocumentViewer/en/44-1/INDU/meeting-111/notice.
Surveillance Studies Centre (2022) Beyond big data surveillance - freedom and fairness: A report for all Canadian citizens. Queen’s University. Available at https://www.surveillance-studies.ca/sites/sscqueens.org/files/bds_report_eng-2022-05-17.pdf.
Taddeo, M and Floridi, L (2021) Regulate artificial intelligence to avert cyber arms race. In Ethics, Governance, and Policies in Artificial Intelligence. Cham: Springer International Publishing, pp. 283–287.
Tessono, C and Solomun, S (2022) How to fix Canada’s proposed Artificial Intelligence Act. Tech Policy Press, December 6, 2022. Available at https://techpolicy.press/how-to-fix-canadas-proposed-artificial-intelligence-act.
Tessono, C, Stevens, Y, Malik, MM, Solomun, S, Dwivedi, S and Andrey, S (2022) AI oversight, accountability and protecting human rights: Comments on Canada’s proposed Artificial Intelligence and Data Act. Submission to the Standing Committee on Industry and Technology on Bill C-27, November 2022. Available at https://www.ourcommons.ca/Content/Committee/441/INDU/Brief/BR12444167/br-external/CenterForInformationTechnologyPolicy-e.pdf.
The Bureau of Investigative Journalism (2022) Behind TikTok’s boom: A legion of traumatised, $10-a-day content moderators. The Bureau of Investigative Journalism, October 20, 2022. Available at https://www.thebureauinvestigates.com/stories/2022-10-20/behind-tiktoks-boom-a-legion-of-traumatised-10-a-day-content-moderators.
Treasury Board of Canada Secretariat (2019) Directive on Automated Decision-Making. Available at https://www.tbs-sct.canada.ca/pol/doc-eng.aspx?id=32592.
UK Office for Artificial Intelligence (2023) A pro-innovation approach to AI regulation. Command Paper Number: 815. Available at https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper.
U.S. White House (2023) Biden-Harris Administration Secures Voluntary Commitments from Eight Additional Artificial Intelligence Companies to Manage the Risks Posed by AI. Fact Sheet. Available at https://web.archive.org/web/20231215180934/https://www.whitehouse.gov/briefing-room/statements-releases/2023/09/12/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-eight-additional-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/.
U.S. Nuclear Regulatory Commission (2024) Considerations for developing artificial intelligence systems in nuclear applications. A co-authored report by the Canadian Nuclear Safety Commission, UK Office for Nuclear Regulation and US Nuclear Regulatory Commission, September 2024. Available at https://www.nrc.gov/docs/ML2424/ML24241A252.pdf.
Vipra, J and Korinek, A (2023) Market concentration implications of foundation models: The invisible hand of ChatGPT. Brookings Institution, September 7, 2023. Available at https://www.brookings.edu/articles/market-concentration-implications-of-foundation-models-the-invisible-hand-of-chatgpt.
Walker, JS and Wellock, TR (2010) A short history of nuclear regulation, 1946–2009. U.S. Nuclear Regulatory Commission, Washington, D.C.
Walker, S and Alonso, J (2016) Open government, and the 4th Industrial Revolution. Open Canada, January 26, 2016. Available at https://web.archive.org/web/20230326070439/https://open.canada.ca/en/blog/open-government-and-4th-industrial-revolution.
Wei, K, Ezell, C, Gabrieli, N and Deshpande, C (2024) How do AI companies “fine-tune” policy? Examining regulatory capture in AI governance. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society. ACM, Vol. 7, pp. 1539–1555.
Wheeler, T (2023) The three challenges of AI regulation. Tech Talk. Brookings Institution, June 15, 2023. Available at https://www.brookings.edu/articles/the-three-challenges-of-ai-regulation.
Whittaker, M (2021) The steep cost of capture. Interactions 28(6), 50–55. https://doi.org/10.1145/3488666.
Widder, DG, Meyers West, S and Whittaker, M (2024) Why ‘open’ AI systems are actually closed, and why this matters. Nature 635, 827–833. https://www.nature.com/articles/s41586-024-08141-1.
Williams, A, Miceli, M and Gebru, T (2022) The exploited labor behind artificial intelligence. Noema Magazine, October 13, 2022. Available at https://www.noemamag.com/the-exploited-labor-behind-artificial-intelligence.
Witzel, M (2022) A few questions about Canada’s Artificial Intelligence and Data Act. Centre for International Governance Innovation, August 11, 2022. Available at https://www.cigionline.org/articles/a-few-questions-about-canadas-artificial-intelligence-and-data-act.
Wylie, B (2023) We’re in an AI hype cycle—can Canada make it a responsible one? The Monitor. Canadian Centre for Policy Alternatives, July 20, 2023. Available at https://monitormag.ca/articles/were-in-an-ai-hype-cycle-can-canada-make-it-a-responsible-one.
Zuboff, S (2019) The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: Public Affairs.

Figure 1. ISED’s ministerial engagements with Canadians for the Digital Charter.
