The AI Act: An Incomplete Framework
By Hugh Mansfield, President, Bizcom
An essay with subsequent editorial notes

The ascent of Artificial Intelligence (AI) has swept the global economy and forced governments to establish regulatory standards that foster technological innovation while safeguarding individual rights. The speed at which AI continues to move has produced the first comprehensive AI legislation, the EU AI Act. As the first detailed global regulatory framework, the AI Act will significantly change the way companies operate in the EU and related global markets. This paper examines some of the deficiencies within the AI Act: the lack of compliance standards that meet the needs of both developers and individual rights, the absence of clear, coordinated enforcement mechanisms, and the influence of political agendas.
Artificial Intelligence (AI) has existed for more than 50 years. However, recent increases in computational power and the abundance of global data have resulted in numerous advances in AI and Machine Learning (ML). AI is now ubiquitous in both our economy and our society. From facial recognition for unlocking an iPhone, to purchase-pattern recognition on Amazon, to recommender algorithms on Netflix, to traffic-routing optimization at UPS, to the measurement and balancing of soil nutrients in vertical farming, the world is on the precipice of an AI/ML transformational journey.
In The Beginning
In November 2022, OpenAI launched ChatGPT, the first public Large Language Model (LLM) platform that let users converse with a computer that returns humanlike answers to everyday questions in seconds. Other LLMs quickly followed: Google’s Gemini (formerly Bard) and LaMDA, Meta’s Llama, Microsoft’s Orca, and Salesforce’s Sales GPT, to name a few. ChatGPT reached 100 million users in its first two months of availability, a modern-day adoption record, indicating that generative AI is not going away anytime soon. While AI was already under scrutiny and set to face tighter government regulation worldwide, the LLMs were the ‘gasoline on the fire’ that prompted late additions to the impending regulatory changes.
The AI Act
The AI Act is largely based on the OECD AI principles, which promote AI in a cooperative environment between governments, the private sector, and the public to create human-centric, trustworthy AI. The AI Act attempts to balance innovation with the fundamental rights of individuals, namely the protection of personal data, privacy, and non-discrimination, especially as these relate to employment, creditworthiness, and individual freedoms. The underlying hope of the AI Act is that, as with the GDPR, other governments will build complementary frameworks that largely work in unison. However, can this be accomplished given the speed of the technology, the slower pace of legislative processes worldwide, and the lack of qualified technical expertise? The confusion created by the lack of a solid framework presents many issues, of which trustworthiness will be the hardest to convey. AI is not a static platform or product like those of other regulated industries; it evolves continuously. The lack of detail in the current AI Act will undoubtedly result in a highly subjective framework. Kai Zenner, Head of Office and Digital Policy Advisor to Axel Voss, MEP and lead political architect of the AI Act, stated, “Right now, it is a rather chaotic system.” (Holistic AI interview, March 2024)
The AI Act is based on a tiered risk matrix that requires organizations to self-impose governance structures to remain compliant with its guidelines and standards. All systems must have a level of human oversight congruent with their level of risk. The supporting technical standards are expected to be harmonized by the European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC). Once work begins, broader global participation and cooperation will likely come from the International Organization for Standardization (ISO/IEC 42001) and the Institute of Electrical and Electronics Engineers (IEEE). The ISO/IEC 42001 standard, published in December 2023, specifies an AI Management System (AIMS) that instructs organizations on the measures required to create a “trustworthy” environment for their AI/ML systems. Detailed product specifications, ethical standards, encryption security, and privacy by design are among the performance standards to be incorporated, along with user manuals and functional safety practices, made available to all developers and included in their downstream operations. The first set of standards is scheduled to be unveiled in the summer of 2024, but is likely to be light on substantive details that effectively address all considerations related to the guiding principles. (Ed Note: the standards are still incomplete, which may result in further AI Act delay announcements on November 19th)
One anticipated area missing from the standards is risk management and its impacts on fundamental rights, such as personal data, privacy, and anti-discrimination (Pouget, 2024, p. 4). Currently, the EU Charter of Fundamental Rights comprises 54 Articles that have been neither mentioned nor addressed in terms of documentation, consequences, or assessment criteria for developers to follow. Both the European Consumer Voice in Standardisation (ANEC) and European Digital Rights (EDRi) have argued that “…decisions, with potentially severe implications on people’s rights and freedoms, need to remain within the remit of the democratic process, supported by experts in areas such as equality and non-discrimination as well as the socio-technical impacts of technology and data” (Pouget, 2024, p. 4). Currently, both CEN and CENELEC are reluctant to draft such standards, citing a lack of technical expertise and unclear application to the impending legislation. As Zenner notes, CEN/CENELEC “will need to continuously adjust their program again, and again, and again.” (Holistic AI interview, March 2024)
As technical standards were being developed, the European Commission initiated a self-assessment program called the “AI Pact”. Its purpose was to create an environment in which organizations impacted by the AI Act would engage in a self-governance structure and voluntarily communicate their practices, procedures, and ongoing commitment to compliance. Google, originally the only large tech giant involved in this initiative, volunteered as an early supporter and participant in drafting the code, which began in May 2023. Margrethe Vestager, EU Commissioner for Competition, stated, “…we need voluntary agreement on universal rules for AI now.” (Chee, 2023, p. 2) In November 2023, the EU launched a “call for interest” for companies willing to actively engage in the AI Pact, collecting early ideas and best practices for future pledges. The EU has announced that, following the formal launch of the AI Act, the AI Pact will be unveiled with “frontrunner” organizations invited to make their pledges in public. While the public commitment to the early language of the AI Act framework is admirable, there is no legally binding mechanism to enforce the pledge; it appears to be a politically motivated effort by large tech to signal trustworthiness and gain visibility in the public eye.
The AI Act demands transparency in design and development, also known as explainability, which remains foundational to the Act. However, without clear standards, it is unclear how transparency is to be applied or what measurements are needed at each level of the risk matrix. The matrix is deliberately open-ended in its effort to promote innovation, especially within SMEs: it asks the system provider to determine which risk level its system falls under and the measures required to meet the prescribed requirements. It also suggests that organizations create a risk-assessment framework incorporating human oversight of all AI systems within the organization, both those that exist today and those to be developed. When called upon, organizations will be required to demonstrate documented compliance of their model with the associated level of risk. The current vagueness and loose interpretation create greater uncertainty, risk aversion and, ultimately, non-compliance, especially for SMEs and entrepreneurs, the very audience whose innovation the AI Act is attempting to nurture. Anna Makanju, Vice President of Global Affairs at OpenAI, explained, “At the end of the day, the companies that are building these models know the most about them. They have the most granular understanding of all the different aspects. And I don’t think anyone should trust companies to self-regulate.” (Elliott, 2024, p. 7) A chilling remark from one of the global AI powerhouses, and perhaps a harbinger of stalled innovation within the EU or, more problematically, of exponential increases in violations due to the lack of definitive direction within the AI Act.
The Risk Matrix addresses four levels of risk and primarily focuses on high-risk AI systems. The Commission is still working on the final version of the matrix guidelines, which are expected to be made public no later than the end of April 2024.
Unacceptable Risk
The highest level in the risk matrix is Unacceptable Risk, which prohibits AI systems such as social scoring by governments: an evaluation of an individual’s trustworthiness or creditworthiness based on multiple data points, time, and occurrences found on social media or through online purchases, which together produce an individual’s score. Benefits flow to those with high scores and are withheld from those without.
Through amendments under Annex II, Unacceptable Risk also includes predictive policing, whereby individuals’ profiles are assessed to predict the likelihood of their committing a future crime; as currently written, this practice is prohibited. Some Member States would like to see it reopened for discussion and moved to the High-Risk category. Unacceptable Risk also bans remote biometric systems, such as CCTV cameras in public areas, except for use in exceptional circumstances by law enforcement to prosecute serious crimes.
Recently, it has become apparent that prior projects involving significant biometric measurement may have been deliberately overlooked. The EU Observer reports that an unnamed civil servant claimed that both law enforcement and migration control authorities still have lie-detector technology at their disposal (Borak, 2024). The technology was piloted in Hungary, Greece, and Latvia under the EU-funded Intelligent Portable Border Control System (iBorderCtrl) project launched in 2018. After the European Research Executive Agency refused “access to certain information” (February 2022), Patrick Breyer appealed in the public interest to the Tenth Chamber of the Court of Justice of the European Union (CJEU). The CJEU dismissed the appeal, ruling that documentation related to the pilot project was not available to the public due to the sensitivity of the information and the protection of commercial interests (CJEU, Case C-135/22 P, Sept. 2023). Despite the AI Act’s reservations about biometrics and about AI’s ability to measure effectively without bias and discrimination, emotion recognition (e.g., lie detectors) was not included in Article 5 but instead deemed “High-Risk” and prohibited in the workplace and educational institutions except for safety and medical reasons. Furthermore, deployers may use future emotion-recognition systems without consent for the detection of criminal offences. This divergence from the biometric standards set out in the matrix is concerning and suggests that political agendas, driven by the upcoming June European Parliamentary elections, may be prioritized to appease popular support for tighter migration and border controls.
High Risk
The second level of the Risk Matrix is High Risk, which is more expansive than Unacceptable Risk: it covers pre- and post-deployment measures for AI systems, especially in education, employment, law enforcement, justice administration, infrastructure management, and immigration. Systems considered high-risk are subject to ongoing obligations during the lifespan of the specific project. Prior to launch, systems must be thoroughly scrutinized to ensure compliance with conformity standards, and all systems must be registered in the EU database. Once operational, high-risk systems will be subject to ongoing audits and monitoring; risk management, training, data governance, and technical records will also need to be in place and continuously adjusted. Zenner believes High-Risk will apply to 30-40 per cent of EU companies, and that GPAI will be the largest area within the matrix at approximately 50 per cent. (Holistic AI interview, March 2024)
Low Risk
The third and fourth levels of the Risk Matrix are deemed Low and Minimal Risk. These encompass foundational generative AI systems such as ChatGPT and Gemini, which currently face limited restrictions and received special acknowledgement through lobbying efforts, led by Germany, France, and Italy, to keep foundational models self-regulating. Others believed these large models should bear the costs and scrutiny of third-party conformity assessments on a risk basis. However, it is suggested that some form of human oversight be in place to ensure fairness and ethical behaviour are upheld.
Throughout the AI Act, there were intense negotiations over general-purpose AI systems (GPAIs), also known as foundational models, as they uniquely span so many verticals, encompass a broad range of tasks, and impact downstream applications. From generating code to analyzing medical images or creating databases for employee training, these generative AI systems require distinct positioning below Low and Minimal risk on the matrix.
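To make the tiering concrete, the matrix described above can be sketched as a simple classifier. This is an illustrative sketch only: the use-case sets below are hypothetical shorthand for the examples discussed in the text, not the Act's legal definitions, and a real classification would turn on detailed legal criteria.

```python
# Illustrative sketch of the AI Act's four-tier risk matrix.
# The use-case labels below are hypothetical shorthand, not legal categories.

UNACCEPTABLE = {"social_scoring", "predictive_policing", "public_remote_biometrics"}
HIGH = {"hiring_screening", "credit_scoring", "border_control",
        "critical_infrastructure"}
LOW = {"chatbot", "deepfake_generation"}  # transparency duties apply


def risk_tier(use_case: str) -> str:
    """Map a (hypothetical) use-case label to its risk tier and key duty."""
    if use_case in UNACCEPTABLE:
        return "unacceptable: prohibited"
    if use_case in HIGH:
        return "high: conformity assessment, EU registration, ongoing monitoring"
    if use_case in LOW:
        return "low: transparency obligations"
    return "minimal: no specific obligations"
```

Anything not captured by the higher tiers falls through to Minimal Risk by default, mirroring the Act's provider-led self-classification described above.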
GPAI Models
GPAI models must keep technical documentation current and make it available to the newly created AI Office and national regulators. They must also keep records of downstream providers that integrate the model into their AI systems: anything generated needs to be traceable. For example, if Acme Consulting Firm created a highly customized ChatGPT LLM platform to streamline a supply-chain management program for Company B, Acme would require documentation acknowledging the model used, the evaluations conducted, and the cybersecurity protections, among others. Acme would also be required to provide a detailed account of the data collections and data sets used in the model’s training. Under the AI Act, GPAI models are also required to ensure compliance with EU copyright law, to find an equitable revenue-sharing solution, and to provide copyright holders with an opt-out before any data-scraping exercise. The Act further requires watermarking, which makes clear what has been generated by AI; this is essential, especially as it relates to deepfake content and understanding how such content is created. Zenner is again skeptical of the language currently used to describe watermarking, suggesting “it is at best vague”. (Holistic AI interview, March 2024)
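The Acme example above is essentially a record-keeping exercise, which can be sketched as a data structure. Every class and field name here is a hypothetical illustration of the kinds of documentation described, not anything prescribed by the Act.

```python
from dataclasses import dataclass, field


@dataclass
class GPAIModelRecord:
    """Hypothetical documentation record a GPAI provider might maintain
    for the AI Office and downstream integrators (illustrative fields only)."""
    model_name: str
    training_data_summary: str                              # data sets used in training
    evaluations: list = field(default_factory=list)         # evaluations conducted
    cybersecurity_measures: list = field(default_factory=list)
    downstream_providers: list = field(default_factory=list)

    def register_downstream(self, provider: str) -> None:
        # Keep a traceable record of who integrates the model downstream.
        self.downstream_providers.append(provider)
```

In the scenario above, Acme would appear in the base model provider's `downstream_providers` list, while keeping its own record of the customized system it built for Company B.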
FLOPs
A further obligation applies to GPAI models with “systemic risk,” which can be triggered if training compute exceeds 10^25 floating-point operations (FLOPs), if the Commission so determines based on cumulative criteria including parameter count and input and output modalities, or if the number of business users is excessive. Additional obligations include extensive model performance testing, ongoing risk assessments, comprehensive risk mitigation measures, and robust cybersecurity protocols. (Ed Note: As of June 2025, there are over 30 publicly known models that exceed the 10^25 compute threshold. (Epoch))
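For a rough sense of how the 10^25 threshold is assessed, training compute is often estimated with the widely cited heuristic of 6 × parameters × training tokens. A minimal sketch, assuming that heuristic and hypothetical model sizes:

```python
# Rough training-compute estimate via the common 6 * N * D heuristic
# (FLOPs ~ 6 x parameters x training tokens). Model sizes below are
# hypothetical; the threshold is the Act's systemic-risk trigger.

SYSTEMIC_RISK_FLOPS = 1e25


def training_flops(parameters: float, tokens: float) -> float:
    """Approximate total training compute in FLOPs."""
    return 6.0 * parameters * tokens


def presumed_systemic_risk(parameters: float, tokens: float) -> bool:
    """True if the estimate meets or exceeds the 10^25 FLOPs trigger."""
    return training_flops(parameters, tokens) >= SYSTEMIC_RISK_FLOPS


# A hypothetical 70B-parameter model trained on 15T tokens:
# 6 * 70e9 * 15e12 = 6.3e24 FLOPs, just under the threshold.
```

The heuristic illustrates why the threshold captures only the largest frontier models: even a sizeable 70-billion-parameter training run can land below 10^25 FLOPs, while the leading GPAI models exceed it.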
Under the AI Act, coordinated regulatory sandboxes will be established, allowing companies to test their models and ideas in a controlled environment under the supervision of the AI Office. The intended benefit is to expedite AI innovations to market, subject to approval by the AI Office. How the mechanics and funding of the sandboxes are to be operationalized remains unclear, but notionally it is a good idea, especially for the SME and entrepreneurial communities. Axel Voss, MEP, supports innovation and opportunity through the regulatory sandboxes and forthcoming technical standards, and believes these will invariably support SMEs looking to develop models with regulatory oversight. (Voss, March 13, 2024, p. 1) However, Voss is guarded in his support, stating, “We have serious doubts if the product safety approach is conceptually capable of regulating evolving technology.” (Voss, March 13, 2024, p. 1) Voss is concerned that the governance system is overly complicated and is following a similarly convoluted path to the one taken during the development of the GDPR. He adds, “Our AI developers will often not know how to comply with the AI Act and who to turn to if they face problems.” (Voss, March 13, 2024, p. 1)
Mistral, a mid-size AI company based in France, contributed significantly to the technology protections built into the AI Act for European SMEs. Mistral was repeatedly hailed as the “…European champion that could compete against US giants like Chat GPT creator OpenAI with an open-source product…” (Leprince-Ringuet, 2024, p. 1) Immediately after the final draft of the AI Act, Mistral entered into a deal with Microsoft, a stark reminder to the EU that, no matter how hard one tries to regulate Big Tech, the ambition is limited in a fast-moving technological revolution and a context of monopolistic dominance. Yann Lechelle, founder of open-source company Probabl, stated, “This partnership between Microsoft is a major and logical opportunity. Unfortunately, it shows that the distribution of any major SaaS offering, AI or otherwise, depends on and reinforces already dominating hyperscalers” (Leprince-Ringuet, 2024, p. 5).
The AI Act will receive final approval from the European Parliament on April 10th or 11th of 2024 and will enter into force 20 days thereafter. It will apply to the following risk groups, with assigned grace periods before full enforcement: Prohibited Systems (6 months), GPAI (12 months), AI systems under Annex III (24 months), and AI systems under Annex II (36 months). Codes of Practice for all risk groups must be in place within 9 months of entry into force. (Ed Note: The AI Act was approved and came into force August 2, 2024.)
Under the AI Act, an AI Office will be established within the European Commission to foster new standards and monitor existing practices. The Office will report to the European AI Board, which will be created with representation from the member states and chaired by the Commission. The Board’s primary role is to issue opinions and recommendations on the AI Act and to enforce penalties through coordination with the member states.
Associated Monetary Penalties (AMPs)
The AI Act has a strict regime of Associated Monetary Penalties (AMPs) ranging from 7.5 million euros or 1.5% of global revenues to 35 million euros or 7% of global revenues, depending on the violation and the size of the company. The Commission will be required to set up an AI Office and is currently mandated to begin staffing that operation. (Note: currently, the suggested start dates for some of the job postings are “Autumn 2024”, on one-year contracts expandable to six years.) While the penalty ranges have been established, enforcement remains a weak link for several reasons.
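Each penalty tier pairs a fixed sum with a share of worldwide annual revenue, with the ceiling generally understood to be the higher of the two (and, for SMEs, the lower). The function below is an illustrative sketch of that logic under those assumptions, not legal guidance; the figures mirror the ranges cited above.

```python
# Illustrative sketch of the AI Act's penalty ceilings: a fixed sum paired
# with a share of worldwide annual turnover, taking the higher of the two
# for large companies and (by assumption here) the lower for SMEs.

def fine_ceiling(fixed_eur: float, turnover_share: float,
                 turnover_eur: float, sme: bool = False) -> float:
    """Maximum fine for a given violation tier and company size."""
    share_eur = turnover_share * turnover_eur
    return min(fixed_eur, share_eur) if sme else max(fixed_eur, share_eur)
```

For the top tier (35 million euros or 7%) applied to a firm with 2 billion euros in global revenue, `fine_ceiling(35e6, 0.07, 2e9)` yields a ceiling of 140 million euros, since the turnover share exceeds the fixed sum.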
The lack of a coordinated enforcement framework is highly problematic. The Act needs to clearly define cooperation with member states where existing cybersecurity and data protection authorities, enforcement laws, and practices already exist. For example, the Commission may introduce new compliance rules for member states, with Article 5 (Unacceptable Risk) prohibitions as the top priority within 60 days of notice of violations. Member states can bring Article 5 offenders to the Commission within 30 days. If the Commission determines that a violation in one member state has an impact across all 27 member states, it may issue a blanket recommendation across the entire EU. As noted in Article 66, “If the measure taken by the relevant Member States is considered justified by the Commission, all Member States shall ensure that appropriate restrictive measures are taken in respect of the AI system concerned, such as withdrawal of the AI system from their market without undue delay and shall inform the Commission accordingly.” (EU AI Act, Article 66, Sec. 2) The difficulty is that a violation and its remedial actions in one Member State may not be consistent with the enforcement rules in another. What is deemed of national importance and applicable across all Member States may be challenging to invoke in the current absence of detailed precedents and review processes. Additionally, other governing bodies with AI responsibilities, such as the European Union Agency for Cybersecurity (ENISA) and the European Central Bank (ECB), still need to be acknowledged within the Act and considered when violations and notifications occur. The lack of consistency will invariably lead to additional audits, legal challenges, and other escalations in compliance costs.
Doubts Still Exist
At the final plenary session of the European Parliament on March 13th, 2024, the EU Parliament voted in favour of adopting the AI Act. After more than three years of drafting, considerable doubt remains, as expressed by some participants. Similarly, the European Data Protection Supervisor (EDPS) issued a statement acknowledging its support for the AI Act but with numerous reservations, suggesting the AI Act “as it stands now, could prove to be a missed opportunity.” (EDPS, March 13, 2024) The EDPS is concerned with the generalities in the enforcement language of the Act and suggests “their largely declarative nature would inevitably lead to a divergent application of the convention, thus undermining legal certainty…” (EDPS, March 13, 2024, p. 1). It is very concerned about the ambiguities of the Act and “certain AI uses that could severely undermine human dignity and individual autonomy, human rights, democracy and the rule of law.” (EDPS, March 13, 2024, p. 2) Kai Zenner stated, “There is no time for celebration, especially since the AI Act is far from perfect. What the EU needs right now is a collective effort of all involved public and private actors.” (Zenner, 2024, p. 7)
The AI Act is a regulatory framework without precedent. Over the next 24-36 months of implementation, a consolidated effort between the public and private sectors, CEN and CENELEC, and the 27 member states of the EU will be required to complete a more definitive compliance framework that offers greater clarity for all participants. The current shortcomings in detail can be largely attributed to a politically motivated agenda to have the Act in place prior to the EU Parliamentary elections in June 2024. Finding the balance between protecting citizens and encouraging innovation may prove the most challenging issue, as the speed at which AI is moving may keep regulators one step behind for a very long time.
References
- Bertuzzi, Luca, March 13, 2024, The Final Plenary Meeting…; Retrieved on March 13, 2024 from LinkedIn
- Borak, Masha, February 16, 2024, Is the EU AI Act leaving a backdoor for emotion recognition?; Retrieved on March 11, 2024 from the BiometricUpdate.com website
- Chander, Sarah & Jakubowska, Ella, January 31, 2024, Council to vote on EU AI Act: What’s at stake?; Retrieved on March 4th, 2024 from the European Digital Rights (EDRI) website, https://edri.org/our-work/council-to-vote-on-eu-ai-act-whats-at-stake/#:~:text=Despite%20making%20very%20few%20compromises,relating%20to%20law%20enforcement%20agencies.
- Chee, Foo Yun, May 24, 2023, EU, Google to develop voluntary AI pact ahead of new AI rules, EU’s Breton says, Retrieved March 8, 2024 from Reuters website, https://www.reuters.com/technology/eu-google-develop-voluntary-ai-pact-ahead-new-ai-rules-eus-breton-says-2023-05-24/#:~:text=EU%20Commissioner%20for%20Competition%20Margrethe,she%20said%20in%20a%20tweet.
- CJEU Appeal; Breyer, Patrick vs. European Research Executive Agency (REA), Case C-135/22 P, September 7, 2023; CURIA – Documents (europa.eu)
- Csernatoni, Raluca, March 6, 2024, Charting the Geopolitics and European Governance of Artificial Intelligence; Retrieved on March 6, 2024 from Carnegie Europe website, https://carnegieeurope.eu/2024/03/06/charting-geopolitics-and-european-governance-of-artificial-intelligence-pub-91876
- Elliott, David, March 1, 2024, AI: Will governance catch up with the tech in 2024?; Retrieved on March 8, 2024 from the World Economic Forum website, https://www.weforum.org/agenda/authors/david-elliott/
- European Commission AI Pact, March 6, 2024, AI Pact; Retrieved on March 8, 2024 from EU Commission website, https://digital-strategy.ec.europa.eu/en/policies/ai-pact
- European Data Protection Supervisor (EDPS) press release, EDPS Statement in view of the 10th and last Plenary Meeting of the Committee on Artificial Intelligence (CAI) of the Council of Europe drafting the Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law; Retrieved from EDPS website, https://www.edps.europa.eu/press-publications/press-news/press-releases/2024/edps-statement-view-10th-and-last-plenary-meeting-committee-artificial-intelligence-cai-council-europe-drafting-framework-convention-artificial_en#:~:text=The%20EDPS%20is%20convinced%20that,for%20the%20developers%2C%20providers%20and
- The EU Artificial Intelligence Act, January 21, 2024, The AI Act Explorer | EU Artificial Intelligence Act
- Gentile, Heather, February 8, 2024, Preparing for the EU AI Act: Getting Governance Right; Retrieved on March 6, 2024 from IBM Website, https://www.ibm.com/blog/eu-ai-act/
- Kellerhals, Thierry, February 13, 2024, What’s the risk of not having a clean AI Governance in Place?; Retrieved on March 6, 2024 from the KPMG website, https://kpmg.com/ch/en/blogs/home/posts/2024/02/whats-the-risk-of-not-having-a-clean-ai-governance-in-place.html
- Leprince-Ringuet, Daphne, February 28, 2024, Mistral’s deal with Microsoft is causing controversy; Retrieved on March 6, 2024 from Sifted EU website, https://sifted.eu/articles/mistral-microsoft-deal-controversy
- Pouget, Hadrien & Zuhdi, Ranj, March 4, 2024, AI and Product Safety Standards Under the EU AI Act; retrieved on March 5, 2024 from Carnegie Endowment For International Peace website, https://carnegieendowment.org/2024/03/05/ai-and-product-safety-standards-under-eu-ai-act-pub-91870#:~:text=However%2C%20AI%20standards%20remain%20incomplete,the%20act%20will%20promote%20innovation
- Ryan-Mosley, Tate, December 11, 2023, Why the EU AI Act was so hard to agree on; retrieved on March 4, 2024 from MIT Technology Review website, https://www.technologyreview.com/2023/12/11/1084849/why-the-eu-ai-act-was-so-hard-to-agree-on/
- Thomas, Riya, March 6, 2024, EU AI Act: Charting a new course for AI governance and research; Retrieved on March 6, 2024 from the Enago Academy website, https://www.enago.com/academy/impact-of-eu-ai-act-on-research/
- Toews, Rob, March 10, 2024, AI Predictions for the Year 2030; Retrieved on March 10, 2024 from Forbes website, https://www.forbes.com/sites/robtoews/2024/03/10/10-ai-predictions-for-the-year-2030/?sh=1e03266e40bc
- Voss, Axel, March 13, 2024, After almost Three years of political negotiations…; Retrieved on March 13th from Axel Voss LinkedIn; https://www.linkedin.com/in/axel-voss-a1744969/?locale=en_US
- Zenner, Kai, February 15, 2024, Some personal reflections on the EU AI Act: a bittersweet ending; Retrieved on March 4, 2024 from Zenner’s LinkedIn Page, https://www.linkedin.com/pulse/some-personal-reflections-eu-ai-act-bittersweet-ending-kai-zenner-avgee/
- Holistic AI, March 20, 2024, Implementing the EU AI Act: Text to Practice; Holistic AI interview with Kai Zenner