The EU AI Act, originally drafted in 2021, is close to being ratified by member countries and will soon affect businesses operating in the EU. This groundbreaking legislation establishes a risk-based tiered system that distinguishes between high-risk and general-purpose AI systems, providing clear directives for achieving compliance. Failing to comply with the act could force companies to steer clear of the region or restrict access to their products, as firms previously experienced with the Digital Services Act and the Digital Markets Act. Time is of the essence for AI companies to prepare for the requirements. Is your team ready?
In this blog post, we'll delve into the details of the act, its prohibitions, requirements for high-risk AI companies, consequences of non-compliance, and how companies can meet the stringent requirements set forth by the EU AI Act, including key steps teams can take today.
The EU AI Act categorizes AI systems into four risk categories, which determine the level of regulation and oversight required.
Unacceptable risk
AI systems that pose a significant threat to the safety, livelihoods, and rights of people will be prohibited.
Examples include social scoring by governments, real-time remote biometric identification in public spaces (with narrow exceptions), and systems that manipulate human behavior to circumvent users' free will.
High risk
AI systems that can have a substantial impact on the health, safety, and fundamental rights of a person. They will be subject to strict obligations before they can be put on the market, including adequate risk assessment and mitigation systems.
Examples include AI used in critical infrastructure, education, employment and worker management, credit scoring, law enforcement, and the administration of justice.
Limited risk
AI systems with limited potential for manipulation are subject only to transparency obligations, such as informing users that they are interacting with an AI system or that content has been artificially generated.
Examples include chatbots and systems that generate or manipulate image, audio, or video content (deepfakes).
Minimal risk
AI systems that do not belong to any other category are considered to pose minimal or no risk. They will be subject to the least stringent regulations.
Examples include spam filters and AI-enabled video games.
This risk-based approach aims to ensure that AI systems are safe and respect the fundamental rights of individuals, while promoting innovation in the AI sector.
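To make the tiered logic concrete, here is a heavily simplified sketch of how a team might tag internal use cases by assumed risk tier. The tier names follow the act, but the use-case mapping and all identifiers are our own illustrative assumptions, not legal guidance.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "strict obligations before market entry"
    LIMITED = "transparency obligations"
    MINIMAL = "least stringent regulation"

# Hypothetical mapping for illustration only; real classification under
# the act depends on detailed legal criteria and expert review.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "credit_scoring": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the assumed risk tier for a use case, defaulting to MINIMAL."""
    return USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)

print(classify("credit_scoring").name)  # HIGH
```

An inventory like this is only a starting point, but it forces teams to decide, per use case, which set of obligations applies before development begins.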
Being classified as "high risk" under the EU AI Act has several implications for companies and AI teams. Article 43 outlines two conformity assessment procedures that AI providers must choose between.
1. Conformity assessment based on internal control (Annex VI): the provider itself verifies that its quality management system and technical documentation comply with the requirements, without involving a third party.
2. Conformity assessment based on assessment of the quality management system and technical documentation with the involvement of a notified body (Annex VII): an independent notified body reviews the provider's quality management system and technical documentation before the system can be placed on the market.
Under the act, the quality management system must contain the following components:
Regulatory compliance strategy
The quality management system must incorporate a strategy for regulatory compliance, encompassing adherence to conformity assessment procedures and management procedures for modifications to the high-risk AI system.
Design control and verification
Techniques, procedures, and systematic actions for the design, design control, and design verification of the high-risk AI system must be clearly articulated within the quality management system.
Development, quality control, and assurance
The system should define techniques, procedures, and systematic actions governing the development, quality control, and quality assurance of the high-risk AI system.
Examination, test, and validation procedures
The quality management system must outline examination, test, and validation procedures to be conducted before, during, and after the development of the high-risk AI system, specifying the frequency of these processes.
Technical specifications and standards
Technical specifications, including standards, are to be identified, and if relevant harmonized standards are not fully applied, the means to ensure compliance with the requirements should be detailed.
Data management systems and procedures
Robust systems and procedures for data management, covering data collection, analysis, labeling, storage, filtration, mining, aggregation, retention, and any other data-related operations preceding the market placement or service initiation of high-risk AI systems, are integral to the quality management system.
Risk management system
The risk management system outlined in Article 9 must be incorporated within the quality management framework.
Post-market monitoring
Procedures for the establishment, implementation, and maintenance of a post-market monitoring system, as per Article 61, are essential components of the quality management system.
Reporting of serious incidents
Provisions for procedures related to the reporting of a serious incident in compliance with Article 62 must be clearly defined.
Communication protocols
Guidelines for communication with national competent authorities, sectoral competent authorities, notified bodies, other operators, customers, or other interested parties should be established within the quality management system.
Record keeping
Systems and procedures for comprehensive record-keeping of all relevant documentation and information are imperative.
Resource management
The quality management system must address resource management, including measures ensuring the security of supply.
Accountability framework
An accountability framework delineating the responsibilities of management and other staff concerning all aspects specified in this paragraph is a fundamental requirement within the quality management system.
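Several of these components (record keeping, incident reporting, post-market monitoring) ultimately depend on a disciplined, exportable audit trail. Below is a minimal sketch of such a helper; the class, field names, and event labels are our own assumptions, not structures prescribed by the act.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    """One entry in the audit trail for a high-risk AI system."""
    event: str                    # e.g. "data_labeling", "model_validation"
    actor: str                    # person or service responsible
    details: dict = field(default_factory=dict)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class AuditTrail:
    """Append-only log of records, serializable for review."""
    def __init__(self) -> None:
        self._records: list[AuditRecord] = []

    def log(self, event: str, actor: str, **details) -> AuditRecord:
        record = AuditRecord(event=event, actor=actor, details=details)
        self._records.append(record)
        return record

    def export(self) -> str:
        return json.dumps([asdict(r) for r in self._records], indent=2)

trail = AuditTrail()
trail.log("data_labeling", actor="annotation-team", dataset="loans-v3")
trail.log("model_validation", actor="ml-eng", accuracy=0.94)
print(trail.export())
```

In practice this would write to durable, access-controlled storage rather than memory, but the principle is the same: every data and model operation leaves a timestamped, attributable record.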
Non-compliance with the EU AI Act carries hefty penalties that scale with the infringement's severity and the company's size, with the steepest fines, reaching up to €35 million or 7% of global annual turnover, reserved for violations of the act's prohibitions.
While the EU AI Act outlines specific requirements for compliance, practical implementation often requires integrating advanced tools and implementing internal processes. Companies should consider aspects such as continuous evaluation, real-time monitoring, and rigorous documentation of their AI systems.
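One concrete internal process is an automated release gate that blocks deployment when evaluation metrics fall outside agreed limits. The sketch below is illustrative; the metric names and thresholds are assumptions each team would set for itself based on its risk tier.

```python
# Illustrative thresholds; each team would set its own limits.
THRESHOLDS = {
    "hallucination_rate": 0.05,   # max fraction of flagged outputs
    "toxicity_rate": 0.01,
}

def release_gate(metrics: dict[str, float]) -> tuple[bool, list[str]]:
    """Return (approved, violations) for a candidate model's eval metrics."""
    violations = [
        f"{name}={metrics[name]:.3f} exceeds limit {limit}"
        for name, limit in THRESHOLDS.items()
        if metrics.get(name, 0.0) > limit
    ]
    return (not violations, violations)

approved, issues = release_gate(
    {"hallucination_rate": 0.08, "toxicity_rate": 0.0}
)
print(approved)  # False
for issue in issues:
    print(issue)
```

Wiring a check like this into CI/CD turns a written quality policy into an enforced one, and the recorded violations double as evidence for documentation requirements.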
Compliance with the EU AI Act goes beyond a mere legal duty; it is a strategic imperative for companies utilizing AI systems, setting a precedent for forthcoming regulations globally.
Thankfully, you are not alone! By harnessing Galileo's suite of capabilities, businesses can effectively navigate the complex landscape of regulatory standards.
Galileo Prompt facilitates the rapid development of high-performing prompts, ensuring companies can swiftly build transparent and safe AI systems.
Galileo Monitor supports a proactive stance on security and performance by detecting potential issues such as hallucinations in production.
Galileo Finetune aids in the construction of diverse, high-quality datasets for building robust models.
By leveraging these compliance-centric tools, companies can meet regulatory requirements and build trustworthy AI for their customers and users. Request a demo today to begin your journey to compliance!
Working with Natural Language Processing?
Read about Galileo’s NLP Studio