Big Tech Regulatory Tipping Point—Canada Should Take Note

The EU’s AI Act, a legal framework introduced in August 2024, is described by the EU Commission as “based on human rights and fundamental values…that benefits everyone.” It identifies different levels of risk and potential harm to EU citizens from AI, and it obligates both model developers and deployers doing business in the EU (whether physically located there or not) to operate in an ethical and responsible manner. Failing to do so can expose an organization to fines of up to €35,000,000 or 7% of global revenues. The Act has teeth and clarity, and most Europeans and businesses (Big Tech excluded) support it. The EU remains resolute in its pursuit of AI regulation, and Canada should follow suit.
What Is Next for AI
After two failed attempts at modernizing privacy and AI law, Bill C-11 and its successor Bill C-27 (which would have overhauled PIPEDA and introduced the Artificial Intelligence and Data Act, or AIDA, but died twice on the order paper with the collapse of two minority governments), Canada remains without legislation that regulates and guides AI developers, deployers, and the population at large. Embracing regulation is a necessary condition for innovation, not a deterrent to it. Clearly defined guidelines remove uncertainty and provide greater clarity, security, and accountability for all.
As the world’s third-largest purchasing power and a recently announced strategic partner of Canada, the EU is setting the pace. On July 10, 2025, the EU published the voluntary General-Purpose AI (GPAI) Code of Practice, which fulfills Articles 53 and 55 of the AI Act (Transparency, Copyright, and Safety & Security). The Code sets out obligations for model providers such as OpenAI, and was developed with input from nearly 1,000 academics, model providers, AI safety experts, SMEs, and civil society organizations. Drafting began in September 2024, and the Code was finalized by 13 independent experts. Canadian Yoshua Bengio, an outspoken advocate for greater regulation of AI and considered one of the world’s foremost experts on the technology, chaired Working Group 3, which produced the Safety and Security chapter.
Big Tech Crossroads
Big Tech claims the AI Act conflicts with the EU’s copyright laws and stifles innovation, and further suggests the AI Act, the Digital Services Act, and the Digital Services Tax (the latter currently enforced in 12 of the 27 EU member states, and a tax Canada recently backed away from) are nothing more than unfair tariffs masquerading under a different name. It will be interesting to see whether the DST is mentioned in the details of the recently announced 15% tariff agreement between the EU and US.
For Big Tech, primarily US-based, this is an interesting crossroads. Will these companies sign a voluntary code in another jurisdiction of market importance? So far, only Anthropic and OpenAI have agreed to sign, both suggesting it is a gateway to further changes under the EU AI Continent Action Plan announced in April. Meta has stated that it will not sign, while Google, Microsoft, and others are reviewing the Code and will likely announce their decisions in the coming days.
Anu Bradford, Professor of Law and International Organizations at Columbia Law School, suggests we are at a global inflection point, and we can either empower liberal democracies or reinforce authoritarian and corporate dominance. In her book Digital Empires, Bradford states, “Important choices lay ahead that will shape the future ethos of the digital society”.
Tolerance for Big Tech and its demands for less regulation is losing ground. Tom Wheeler, former Chairman of the Federal Communications Commission (2013-2017) and author of Techlash, argues that Big Tech has outgrown the existing oversight structures, enabling it to act as a collection of “pseudo-governments” with incredible power over data, markets, and public discourse in an unsupervised environment. Wheeler suggests public distrust of Big Tech is on the rise, and that there is an urgency to this moment.
Global Regulations Growing
Recent events have sent a clear global signal: regulation of AI is essential and should not be cast aside for fear of stifling innovation or losing the AI race. Texas passed rigorous AI legislation in June 2025, joining the many states that already have AI rules (over 800 pieces of AI-related legislation were acted upon at the state level in 2024 alone). The US Senate voted 99-1 to strip a proposed ten-year moratorium on state AI regulation from President Trump’s Big Beautiful Bill, clearing the way for AI measures on employment and child protection in California, Colorado, New York, and Connecticut. There will be more.
On July 26, 2025, China unveiled its “AI+” plan at its annual state-organized World AI Conference, endorsing a global AI cooperation organization. The plan calls for “adhering to overall development and security, strengthening the docking and coordination amongst countries and promoting the early formation of a global governance framework and rules for AI with broad consensus.” It also acknowledges the disparity between have and have-not nations and the need to share new technologies for the benefit of all, a markedly different approach from the largely unregulated US.
Canada cannot afford to fall behind. Ottawa must seize this moment to establish a world-class regulatory framework that protects citizens and provides businesses with clarity and certainty. Aligning our AI rules with those of Europe would ensure ethical, transparent, and accountable practices, making Canada a credible global player in AI innovation.
Rules provide more than guardrails—they provide trust. Canada can either lead in responsible AI governance or watch from the sidelines as others shape the rules of the game.