
The AI Act: An Incomplete Framework

The rise of Artificial Intelligence (AI) has swept through the global economy and forced governments to craft regulatory standards that foster technological innovation while safeguarding individual rights. The speed at which AI continues to move has produced the first comprehensive AI legislation, the EU AI Act. While it lays down the first detailed global regulatory framework, the AI Act is hampered by the embryonic state of AI development and by the role the Act must play on a global stage dominated by Big Tech. The ambiguities within the AI Act, combined with the lack of compliance standards and the absence of clear enforcement mechanisms, have created a great deal of confusion, which may undermine the Act's legitimacy.

Artificial Intelligence has existed for more than 50 years. However, recent increases in computational power and the abundance of global data have driven numerous advances in AI and Machine Learning (ML). AI is now ubiquitous in both the economy and society: from facial recognition that unlocks an iPhone, to purchase-pattern recognition on Amazon, to recommender algorithms on Netflix, to traffic-route optimization at UPS, to the measurement and balancing of soil nutrients in vertical farming, the world is on the precipice of an AI/ML transformation.

In November 2022, OpenAI launched ChatGPT, the first public Large Language Model (LLM) platform that let users converse with a computer and receive humanlike answers to everyday questions in seconds. Other LLMs quickly followed, including Google's Gemini (formerly Bard) and LaMDA, Meta's LLaMA, Microsoft's Orca, and Salesforce's Sales GPT. ChatGPT reached 100 million users within its first two months of availability, a modern adoption record that signals generative AI is not going away anytime soon. AI was already under scrutiny and facing tighter government regulation worldwide; the LLMs were the gasoline on the fire that prompted late additions to the impending regulatory change.

The AI Act attempts to strike a balance between innovation and the preservation of fundamental individual rights, namely the protection of personal data, privacy, and non-discrimination, especially as these relate to employment, creditworthiness, and personal freedoms. The underlying hope is that, as with the trajectory of the GDPR, other governments will adopt complementary frameworks that largely work in unison. But can this be accomplished given the speed of the technology, the slower pace of legislative processes worldwide, and a shortage of qualified technical expertise? The ambiguities left by an unfinished framework raise trustworthiness concerns: AI is not a static platform or product, yet it will initially be measured within a highly subjective framework. How this international code will translate across global economies will be challenging, especially for SMEs that lack the finances and in-house expertise to secure fundamental rights in the near term.

The AI Act is based on a tiered risk matrix that requires organizations to impose governance structures on themselves to remain compliant, and all systems must have a level of human oversight congruent with their level of risk. The absence of more detailed standards is noticeable. These are expected to be harmonized by the European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC), and once that work begins, broader global participation is likely from the International Organization for Standardization (ISO) and the Institute of Electrical and Electronics Engineers (IEEE). The standards must be consistent with those in the medical, pharmaceutical, and automotive industries to maintain credibility. Detailed product specifications, encryption security, and privacy by design are a few of the performance standards that should be incorporated, along with user manuals and functional-safety practices made available to downstream operators. These detailed standards are scheduled to be unveiled in the summer of 2024; however, as it currently stands, both CEN and CENELEC are reluctant to draft them, citing a lack of technical expertise and of clarity on how they apply to the impending legislation. ISO and IEEE are pursuing their own AI codes, and in the absence of formal EU AI Act standards, organizations seeking conformity may need to adhere to those codes instead.

With technical standards still to be developed, the European Commission initiated a voluntary self-assessment program dubbed the "AI Pact". Its purpose was to create an environment in which organizations affected by the AI Act would adopt a governance structure and voluntarily communicate their practices, procedures, and ongoing commitment to compliance. The Commission was to publish a list of companies that had made the pledge, to build trust and create greater public visibility. Google, the only large tech giant involved, volunteered as an early supporter and participant in drafting the code, which began in May 2023. Margrethe Vestager, EU Commissioner for Competition, stated, "…we need voluntary agreement on universal rules for AI now." (Chee, 2023, p. 2). In November 2023, the EU launched a "call for interest" to companies willing to get actively involved in the AI Pact, collecting early ideas and best practices toward future pledges. The EU has announced that, following the formal launch of the AI Act, the AI Pact will be unveiled, with "frontrunner" organizations invited to make their pledges in public. Anything short of the Big Five tech giants and several leading EU developers pledging their support will look weak. And while a public commitment to the early language of the AI Act framework is admirable, there is no legally binding mechanism to enforce the pledge.

The AI Act demands transparency in design and development, also known as explainability, which remains foundational to the Act. Without standards, however, it is unclear how transparency is to be applied and measured at each level of the risk matrix. The matrix is deliberately open-ended in an effort to promote innovation, especially within SMEs: it asks the system provider to judge which level of risk its system falls into and which measures are required to navigate the prescribed requirements, and it suggests organizations create a risk-assessment framework that incorporates human oversight of every AI system they run today or plan to develop. That latitude creates headaches for SMEs, as loose interpretation breeds greater uncertainty, risk aversion and, ultimately, non-compliance. Anna Makanju, Vice President of Global Affairs at OpenAI, explained, "At the end of the day, the companies that are building these models know the most about them. They have the most granular understanding of all the different aspects. And I don't think anyone should trust companies to self-regulate." (Elliott, 2024, p. 7). A chilling remark from one of the AI powerhouses, and perhaps a harbinger of what is to come.

The Risk Matrix addresses four different levels of risk and is primarily focused on high-risk AI systems.

The highest level in the risk matrix is Unacceptable Risk, which prohibits practices such as social scoring: evaluating an individual's trustworthiness or creditworthiness from multiple data points gathered over time from social media activity or online purchases and distilling them into a single score, with benefits flowing to those with high scores at the expense of those without.

Predictive policing, in which individuals' profiles are assessed to predict the likelihood that they will commit a crime, is prohibited as the Act is currently written, although some Member States would like to see this reopened for discussion and moved to the High-Risk category.

The scraping of facial images from the Internet or from video surveillance footage is forbidden, although new exceptions continue to be made.

Unacceptable Risk also bans remote biometric identification systems, such as CCTV cameras in public areas, except in exceptional circumstances where law enforcement uses them to prosecute serious crimes.

A new area of concern recently came to light: earlier projects involving significant biometric measurement may have been overlooked. In a recent article in the EU Observer, an unnamed civil servant claimed that both law enforcement and migration-control authorities still have lie-detector technology at their disposal (Borak, 2024). The technology was piloted in Hungary, Greece, and Latvia as part of the EU-funded Intelligent Portable Border Control System (iBorderCtrl) project launched in 2018. After the European Research Executive Agency refused him "access to certain information" in February 2022, Patrick Breyer appealed in the public interest to the Tenth Chamber of the Court of Justice of the European Union (CJEU). The CJEU dismissed the appeal, holding that documentation on the pilot project was not available to the public because of the sensitivity of the information and the protection of commercial interests (CJEU, Case C-135/22 P, Sept. 2023). Despite the AI Act's reservations about biometrics and AI's ability to measure emotion without bias and discrimination, emotion recognition was not included in Article 5; it was instead deemed "High-Risk" and prohibited in the workplace and in educational institutions except for safety and medical reasons. Furthermore, deployers may use emotion recognition systems without consent for the detection of criminal offences. This is a complete divergence from the biometric standards set out elsewhere in the matrix, and it supports a project that has undoubtedly cost millions of euros while serving another political need: migration and tighter border controls.

The second level of the Risk Matrix is High Risk. It covers pre- and post-deployment measures for AI systems, especially in education, employment, law enforcement, justice administration, and immigration. Systems considered high-risk are subject to ongoing obligations throughout their lifespan. Prior to launch, they must be heavily scrutinized to ensure conformity standards are met, and all such systems must be registered. Once operational, high-risk systems will be subject to ongoing audits and monitoring; risk management, training, data governance, and technical records will also need to be in place and continuously adjusted.

The third level of the Risk Matrix is Limited Risk. It primarily targets AI systems that can manipulate or deceive individuals, which must be transparent with the humans who engage with them. However, what level of transparency is required?

The fourth level of the Risk Matrix is Low and Minimal Risk. It encompasses foundational generative AI systems such as ChatGPT and Gemini, which currently face limited restrictions and which received special acknowledgement through lobbying efforts, led by Germany, France, and Italy, to keep foundational models self-regulating. Their position was that such large models should bear the costs and scrutiny of third-party conformity assessments only on a risk basis. However, it is suggested that some form of human oversight be in place to ensure fairness and ethical behaviour are upheld.
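To make the tiered structure concrete, the short sketch below shows how an organization might record its AI systems against the four tiers and the self-assessed measures described above. It is a hypothetical illustration in Python, not anything prescribed by the Act; the tier labels, fields, and example entries are assumptions made for the sake of the example.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited practices, e.g. social scoring
    HIGH = "high"                   # e.g. employment, education, law enforcement uses
    LIMITED = "limited"             # transparency duties toward the humans involved
    MINIMAL = "minimal"             # e.g. general-purpose generative assistants

@dataclass
class AISystemRecord:
    """One entry in a hypothetical internal AI-system inventory."""
    name: str
    purpose: str
    tier: RiskTier
    human_oversight: str                                # who reviews outputs, and how
    measures: list[str] = field(default_factory=list)   # self-assessed compliance measures

inventory = [
    AISystemRecord(
        name="cv-screening-assistant",
        purpose="Ranks job applications for recruiters",
        tier=RiskTier.HIGH,
        human_oversight="Recruiter reviews every ranking before shortlisting",
        measures=["registration", "conformity assessment", "ongoing audit log",
                  "data-governance review", "technical documentation"],
    ),
    AISystemRecord(
        name="website-chatbot",
        purpose="Answers customer FAQs",
        tier=RiskTier.LIMITED,
        human_oversight="Escalates to a human agent on request",
        measures=["disclose to users that they are talking to an AI system"],
    ),
]

# Flag anything recorded at the prohibited tier for immediate escalation.
for record in inventory:
    if record.tier is RiskTier.UNACCEPTABLE:
        print(f"ESCALATE: {record.name} falls under a prohibited practice")
```

In practice, an inventory of this kind would feed the registration, audit, and documentation duties described above for high-risk systems.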

Throughout the negotiation of the AI Act, there were intense disputes over general-purpose AI systems (GPAIs), also known as foundational models, because they uniquely cross so many verticals, perform a broad range of tasks, and shape downstream applications. From generating code to analyzing medical images to creating databases for employee training, these generative AI systems required a distinct carve-out in the risk matrix.

As such, there are dedicated rules for these models. GPAI providers must keep technical documentation current and make it available to the newly created AI Office and national regulators. They must also keep records of the downstream providers that integrate the model into their own AI systems. For example, if the Acme consulting firm created a highly customized ChatGPT-based LLM platform to streamline a supply-chain-management program for company B, Acme would require documentation acknowledging the model used, the evaluations conducted, and the cybersecurity protections in place, among other details. Acme would also be required to provide a detailed account of the data collections and data sets used in the model's training. Under the AI Act, GPAI providers must also ensure compliance with EU copyright law and find an equitable revenue-sharing solution, while providing an opt-out to copyright holders prior to any data-scraping exercise. A further set of obligations applies to GPAI models posing "systemic risk", which can be triggered if the training compute exceeds 10^25 floating-point operations (FLOPs) or if the EU determines that the cumulative parameters, including input and output modalities, or the number of business users is excessive. These additional obligations include extensive model performance tests, ongoing risk assessments, comprehensive risk-mitigation measures, and extensive cybersecurity protocols.
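To illustrate how the compute trigger might be checked in practice, the sketch below estimates training compute using the common rule of thumb of roughly six FLOPs per parameter per training token and compares it with the 10^25 threshold. The heuristic, the example model size, and the helper name are assumptions for illustration only; the Act specifies the threshold, not this estimation method.

```python
# Rough check of the AI Act's systemic-risk compute trigger (10^25 FLOPs).
# The 6 * parameters * tokens estimate is a common rule of thumb for dense
# transformer training, used here purely for illustration.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6.0 * parameters * training_tokens

# Hypothetical model: 70 billion parameters trained on 15 trillion tokens.
flops = estimated_training_flops(parameters=70e9, training_tokens=15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")

if flops > SYSTEMIC_RISK_THRESHOLD_FLOPS:
    print("Above 1e25: the model would presumptively carry 'systemic risk' obligations.")
else:
    print("Below 1e25: the presumption of systemic risk would not be triggered.")
```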

Under the AI Act, coordinated regulatory sandboxes will be established in which companies can test their models and ideas in a controlled environment under the supervision of the AI Office. The intended benefit is to help expedite innovative, AI Office-approved ideas to market. How the mechanics and funding of the sandboxes will be operationalized is still unknown, but notionally it remains a good idea.

Most technology companies working in the AI vertical agree that a framework is required and beneficial to ensure that good practices are upheld within a prescribed set of rules. Aidan Gomez, co-founder and CEO of Cohere, a Canadian enterprise Large Language Model company, was recently invited to speak at the World Economic Forum conference on AI in Davos, Switzerland. Gomez stated, "AI is such a powerful, potent technology that I think that we have to protect the little guy. There has to be new ideas. There has to be a new generation of thinkers building and contributing. And so that needs to be a top priority for the regulators." (Elliott, WEF 2024, p. 5).

Cohere has taken extensive investment from Microsoft and, most recently, Accenture, so while its co-founders were at one time those "guys in a lab" creating something innovative, they are now largely controlled by Big Tech. The same can be said of Mistral, a mid-size AI company based in France that contributed significantly to the protections for European SMEs built into the AI Act. Mistral was repeatedly hailed as the "…European champion that could compete against US giants like ChatGPT creator OpenAI with an open-source product…" (Leprince-Ringuet, 2024, p. 1). Yet when the ink was barely dry on the final draft of the AI Act, Mistral entered a deal with Microsoft to create an LLM, Mistral Large, which is not an open-source platform, casting a certain amount of suspicion on Mistral's role in the drafting of the AI Act. It was also a stark reminder to the EU that no matter how hard one tries to regulate Big Tech, it is a limited ambition in a fast-moving technological revolution. Yann Lechelle, former CEO of the French cloud-computing company Scaleway and now founder of the open-source company Probabl, stated, "This partnership between Microsoft is a major and logical opportunity. Unfortunately, it shows that the distribution of any major SaaS offering, AI or otherwise, depends and reinforces already dominating hyperscalers" (Leprince-Ringuet, 2024, p. 5).

The AI Act is expected to receive final approval from the European Parliament on April 10th or 11th of this year, and the law will enter into force 20 days after its publication. It will then apply to the various risk groups after assigned grace periods: prohibited systems (six months), GPAI (12 months), AI systems under Annex III (24 months), and AI systems under Annex II (36 months). Codes of practice for all risk groups must be in place nine months after entry into force.
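As a rough illustration, the sketch below projects the applicability dates for each grace period from a hypothetical entry-into-force date; the chosen date and the helper function are assumptions, and only the month counts come from the schedule above.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add a number of months to a date (day clamped to avoid invalid dates)."""
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    return date(year, month, min(d.day, 28))

# Hypothetical entry-into-force date, used purely for illustration.
entry_into_force = date(2024, 8, 1)

grace_periods_months = {
    "Prohibited systems": 6,
    "Codes of practice in place": 9,
    "GPAI obligations": 12,
    "Annex III high-risk systems": 24,
    "Annex II high-risk systems": 36,
}

for label, months in sorted(grace_periods_months.items(), key=lambda kv: kv[1]):
    print(f"{label}: applies from {add_months(entry_into_force, months)}")
```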

The AI Act carries a stiff regime of Associated Monetary Penalties (AMPs), ranging from 7.5 million euros or 1.5% of global revenues to 35 million euros or 7% of global revenues, depending on the violation and the size of the company. The Commission is required to set up an AI Office and is currently mandated to begin staffing that operation (note: the suggested start dates for some of the job postings are "Autumn 2024", on one-year contracts extendable to six years). And while the penalty ranges have been established, enforcement remains a weak link for a variety of reasons.
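As a back-of-the-envelope illustration of how those ceilings scale with company size, the sketch below computes the maximum exposure for the two penalty endpoints quoted above, assuming the commonly cited rule that the ceiling is whichever is higher of the fixed amount and the percentage of worldwide annual turnover; the function and figures are illustrative, not legal guidance.

```python
# Illustrative maximum-fine calculation under the AI Act's penalty range.
# Assumes the "whichever is higher" reading of fixed amount vs. percentage
# of worldwide annual turnover; not legal advice.

PENALTY_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),    # 35M EUR or 7% of turnover
    "incorrect_information": (7_500_000, 0.015),  # 7.5M EUR or 1.5% of turnover
}

def max_fine(violation: str, worldwide_turnover_eur: float) -> float:
    fixed_cap, pct_cap = PENALTY_TIERS[violation]
    return max(fixed_cap, pct_cap * worldwide_turnover_eur)

# A company with 2 billion EUR in worldwide turnover committing a prohibited practice:
print(f"{max_fine('prohibited_practice', 2e9):,.0f} EUR")  # 140,000,000 EUR
```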

The enforcement framework within the New Legislative Framework (NLF) needs to be more specific: it should clearly state how the Commission will cooperate with member states where cybersecurity and data-protection authorities, enforcement laws, and practices already exist. Currently, the Commission may bring new compliance rules to the member states, with Article 5 prohibitions the top priority, within 60 days of notified violations, and member states can bring Article 5 offenders to the Commission within 30 days. If the Commission determines that one member state's violation has application across all 27 member states, it may issue a blanket recommendation across the entire EU. As noted in Article 66: "If the measure taken by the relevant Member States is considered justified by the Commission, all Member States shall ensure that appropriate restrictive measures are taken in respect of the AI system concerned, such as withdrawal of the AI system from their market without undue delay and shall inform the Commission accordingly." (EU AI Act, Article 66, Sec. 2). The difficulty is that a violation and its remedial actions in one member state may not be consistent with the enforcement rules in another. What is deemed of national importance and applicable across all member states may be harder to invoke given the current absence of detailed precedents and review processes. Additionally, other governing bodies with AI responsibilities, such as the European Union Agency for Cybersecurity (ENISA) and the European Central Bank (ECB), still need to be acknowledged within the Act and considered when violations and notifications occur. The lack of consistency will invariably lead to additional audits, legal challenges, and other escalations in compliance costs.

At the final plenary session on March 13th, 2024, the European Parliament voted in favour of adopting the AI Act. Yet more than three years of drafting still leaves considerable doubt, as expressed by some participants. Axel Voss supports innovation and opportunity through the regulatory sandboxes and forthcoming technical standards, and believes these will support SMEs looking to develop models under regulatory oversight (Voss, March 13, 2024, p. 1). However, Voss is guarded in his support: "We have serious doubts if the product safety approach is conceptually capable to regulate evolving technology." (Voss, March 13, 2024, p. 1). He is concerned that the governance system is overly complicated and is following the same twisted pathway as the development of the GDPR, adding, "Our AI developers will often not know how to comply with the AI Act and who to turn to if they face problems." (Voss, March 13, 2024, p. 1). Similarly, the European Data Protection Supervisor (EDPS) issued a statement supporting the AI Act but with numerous reservations, suggesting that the Act "as it stands now, could prove to be a missed opportunity." (EDPS, March 13, 2024). The EDPS is concerned with the generalities in the enforcement language, warning that "their largely declarative nature, would inevitably lead to a divergent application of the convention, thus undermining legal certainty…" (EDPS, March 13, 2024, p. 1), and is deeply concerned about the Act's ambiguities and "certain AI uses that could severely undermine human dignity and individual autonomy, human rights, democracy and the rule of law." (EDPS, March 13, 2024, p. 2). Kai Zenner, Head of Office and Digital Policy Advisor to Axel Voss, stated, "There is no time for celebration, especially since the AI Act is far from perfect. What the EU needs right now is a collective effort of all involved public and private actors." (Zenner, 2024, p. 7)

The AI Act is a framework without precedent. Over the next 24 to 36 months, a concerted effort will be required to complete a more definitive compliance framework that offers greater clarity to all organizations. Addressing the Act's current shortcomings will be a continual effort, reliant on both private and public participation and on member state cooperation. Finding the balance between protecting citizens and encouraging innovation may prove to be the most challenging issue of all, as the speed at which AI is moving may keep regulators one step behind for a very long time.

References

  1. Bertuzzi, Luca, March 13, 2024, The Final Plenary Meeting…; retrieved from LinkedIn on March 13, 2024
  2. Borak, Masha, February 16, 2024, Is the EU AI Act leaving a backdoor for emotion recognition?; Retrieved on March 11, 2024 from the Biometric Update website
  3. Chander, Sarah & Jakubowska, Ella, January 31, 2024, Council to vote on EU AI Act: What’s at stake?; Retrieved on March 4th, 2024 from the European Digital Rights (EDRI) website, https://edri.org/our-work/council-to-vote-on-eu-ai-act-whats-at-stake/#:~:text=Despite%20making%20very%20few%20compromises,relating%20to%20law%20enforcement%20agencies.
  4. Chee, Foo Yun, May 24, 2023, EU, Google to develop voluntary AI pact ahead of new AI rules, EU’s Breton says, Retrieved March 8, 2024 from Reuters website, https://www.reuters.com/technology/eu-google-develop-voluntary-ai-pact-ahead-new-ai-rules-eus-breton-says-2023-05-24/#:~:text=EU%20Commissioner%20for%20Competition%20Margrethe,she%20said%20in%20a%20tweet.
  5. CJEU Appeal; Breyer, Patrick vs. European Research Executive Agency (REA), Case C-135/22 P, September 7, 2023, CURIA – Documents (europa.eu)
  6. Csernatoni, Raluca, March 6, 2024, Charting the Geopolitics and European Governance of Artificial Intelligence; Retrieved on March 6, 2024 from Carnegie Europe website, https://carnegieeurope.eu/2024/03/06/charting-geopolitics-and-european-governance-of-artificial-intelligence-pub-91876
  7. Elliott, David, March 1, 2024, AI: Will governance catch up with the tech in 2024?; Retrieved on March 8, 2024 from the World Economic Forum website, https://www.weforum.org/agenda/authors/david-elliott/
  8. European Commission AI Pact, March 6, 2024, AI Pact; Retrieved on March 8, 2024 from EU Commission website, https://digital-strategy.ec.europa.eu/en/policies/ai-pact
  9. European Data Protection Supervisor (EDPS) press release, EDPS Statement in view of the 10th and last Plenary Meeting of the Committee on Artificial Intelligence (CAI) of the Council of Europe drafting the Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law; Retrieved from EDPS website, https://www.edps.europa.eu/press-publications/press-news/press-releases/2024/edps-statement-view-10th-and-last-plenary-meeting-committee-artificial-intelligence-cai-council-europe-drafting-framework-convention-artificial_en#:~:text=The%20EDPS%20is%20convinced%20that,for%20the%20developers%2C%20providers%20and
  10. The EU Artificial Intelligence Act, January 21, 2024, The AI Act Explorer | EU Artificial Intelligence Act
  11. Gentile, Heather, February 8, 2024, Preparing for the EU AI Act: Getting Governance Right; Retrieved on March 6, 2024 from IBM Website, https://www.ibm.com/blog/eu-ai-act/
  12. Kellerhals, Thierry, February 13, 2024, What’s the risk of not having a clean AI Governance in Place?; Retrieved on March 6, 2024 from the KPMG website, https://kpmg.com/ch/en/blogs/home/posts/2024/02/whats-the-risk-of-not-having-a-clean-ai-governance-in-place.html
  13. Leprince-Ringuet, Daphne, February 28, 2024, Mistral’s deal with Microsoft is causing controversy; Retrieved on March 6, 2024 from Sifted EU website, https://sifted.eu/articles/mistral-microsoft-deal-controversy
  14. Pouget, Hadrien & Zuhdi, Ranj, March 4, 2024, AI and Product Safety Standards Under the EU AI Act; retrieved on March 5, 2024 from Carnegie Endowment For International Peace website, https://carnegieendowment.org/2024/03/05/ai-and-product-safety-standards-under-eu-ai-act-pub-91870#:~:text=However%2C%20AI%20standards%20remain%20incomplete,the%20act%20will%20promote%20innovation
  15. Ryan-Mosley, Tate, December 11, 2023, Why the EU AI Act was so hard to agree on; retrieved on March 4, 2024 from MIT Technology Review website, https://www.technologyreview.com/2023/12/11/1084849/why-the-eu-ai-act-was-so-hard-to-agree-on/
  16. Thomas, Riya, March 6, 2024, EU AI Act: Charting a new course for AI governance and research; Retrieved on March 6, 2024 from the Enago Academy website, https://www.enago.com/academy/impact-of-eu-ai-act-on-research/
  17. Toews, Rob, March 10, 2024, AI Predictions for the Year 2030; Retrieved on March 10, 2024 from Forbes website, https://www.forbes.com/sites/robtoews/2024/03/10/10-ai-predictions-for-the-year-2030/?sh=1e03266e40bc
  18. Voss, Axel, March 13, 2024, After almost Three years of political negotiations…; Retrieved on March 13th from Axel Voss LinkedIn; https://www.linkedin.com/in/axel-voss-a1744969/?locale=en_US
  19. Zenner, Kai, February 15, 2024, Some personal reflections on the EU AI Act: a bittersweet ending; Retrieved on March 4, 2024 from Zenner’s LinkedIn Page, https://www.linkedin.com/pulse/some-personal-reflections-eu-ai-act-bittersweet-ending-kai-zenner-avgee/