The EU AI Act for Leaders
What it is
A regulation by the European Union that entered into force on 1 August 2024. While AI offers businesses significant potential value, it also poses serious risks; this legislation aims to ensure AI systems are developed and used responsibly and ethically.
What is defined as an ‘AI system’ under the Act
As defined by the EU AI Act, an AI system is a machine-based system designed to operate with varying levels of autonomy and adaptiveness after deployment. These systems infer from the input they receive to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
Examples of AI Systems in Business
Customer Service Chatbots: Automated systems interacting with customers on websites or social media.
Predictive Analytics Tools: Analysing historical data to forecast future trends and behaviours.
General Purpose AI Chatbots (e.g., ChatGPT): These chatbots are versatile systems capable of understanding and generating human-like text, used for a wide range of tasks including customer support, content generation, and virtual assistance.
Fraud Detection Systems: Monitoring transactions to identify potentially fraudulent activities.
Human Resource Management Tools: Used in hiring processes and employee management.
Key Highlights from the Act
Risk-Based Classification: AI systems are classified based on their risk to safety and fundamental rights.
Unacceptable Risk: Prohibited outright. Examples include AI systems that manipulate human behaviour to cause harm and remote biometric identification systems used without safeguards.
High Risk: AI systems that significantly impact safety or fundamental rights, subject to strict regulations. This includes AI in critical infrastructures, education, employment, and law enforcement.
Limited Risk: AI systems with limited risk, subject to transparency obligations. Users must be informed when interacting with AI, such as chatbots.
Minimal or No Risk: AI systems posing minimal or no risk, like spam filters or AI-based video games, are largely exempt from regulation.
Transparency and Accountability: High-risk AI systems must meet rigorous transparency requirements, ensuring users understand how decisions are made.
Data Governance: AI systems must ensure high standards in terms of data quality and integrity.
Human Oversight: Emphasis on maintaining human control and oversight over AI systems.
Ethical Principles: AI systems should respect human dignity, personal autonomy, and fundamental rights.
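The four risk tiers above can be pictured as a simple lookup from use case to obligation. The sketch below is purely illustrative: the example classifications are drawn from the summary above, not from a legal assessment, and the names are my own.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "strict obligations"
    LIMITED = "transparency obligations"
    MINIMAL = "largely exempt"

# Illustrative mapping of example use cases to tiers, based only on
# the summary above -- not a legal determination for any real system.
EXAMPLE_TIERS = {
    "behaviour-manipulating system": RiskTier.UNACCEPTABLE,
    "hiring screening tool": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    """Summarise the tier and headline obligation for a catalogued use case."""
    tier = EXAMPLE_TIERS[use_case]
    return f"{use_case}: {tier.name} risk -> {tier.value}"

print(obligations("spam filter"))  # spam filter: MINIMAL risk -> largely exempt
```

In practice the classification of any given system is a judgment for your legal team, but encoding it this way makes the register auditable.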
Providers vs. Deployers: Obligations under the Act
The EU AI Act assigns obligations to businesses under five categories of operator. The two most relevant for determining your responsibilities are providers (those who develop and supply AI systems) and deployers (those who integrate and use AI systems within their operations).
General Purpose AI Chatbots
General purpose AI chatbots, such as ChatGPT, Google Gemini, and Microsoft Copilot, come under the scrutiny of the EU AI Act due to their widespread application and potential impact on user rights and safety.
Potential Responsibilities of a ‘Provider’ vs ‘Deployer’ of a General Purpose AI Chatbot
AI System: General Purpose AI Chatbot
Provider: Microsoft / OpenAI / Anthropic / Google
Responsibilities
Risk Assessments: Conduct comprehensive risk assessments to identify and mitigate potential risks associated with the chatbot's use.
Documentation: Maintain detailed documentation on how the model is trained, including data sources, training methodologies, and performance metrics.
Safeguards: Implement safeguards to prevent misuse, such as refusing to generate content that is inappropriate, harmful, or violates company policies.
Transparency: Ensure transparency by providing users with information about how the chatbot works, its limitations, and potential risks.
Continuous Monitoring: Continuously monitor the chatbot's performance and gather user feedback to identify areas for improvement.
Updating and Improving: Regularly update the model to improve its safety, accuracy, and robustness, addressing any newly identified risks.
AI System: General Purpose AI Chatbot
Deployer: Business Using an AI Chatbot
Responsibilities
Integration and Use: Integrate the AI chatbot into their business operations, ensuring it is used in a manner compliant with the EU AI Act.
User Transparency: Inform employees and customers that they are interacting with an AI chatbot and explain its capabilities and limitations. This transparency is often built into AI chatbots via notices such as 'AI-generated content may be incorrect'.
Data Protection: Ensure that all personal data processed by the chatbot is handled in compliance with GDPR and other relevant data protection laws. Implement measures to secure data and maintain user privacy.
Risk Management: Implement internal policies and procedures to manage risks.
Human Oversight: Ensure adequate human oversight by having employees review the AI chatbot’s outputs, especially in critical areas such as content creation, customer interactions, and decision-making processes.
Training and Education: Provide training to employees on how to effectively and safely use AI chatbots. Ensure they understand the ethical implications and limitations of the AI system.
Feedback Mechanism: Establish a feedback mechanism for users to report any issues or concerns with the chatbot. Use this feedback to continuously improve the system's deployment and performance.
Compliance: Regularly review and update their compliance measures to ensure they meet all regulatory requirements under the EU AI Act, including any changes or updates to the legislation.
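As a concrete illustration of the human-oversight responsibility above, a deployer might route customer-facing chatbot outputs through a human review step before release. This is a minimal sketch under my own assumptions; the class names, fields, and routing rule are hypothetical, not anything prescribed by the Act.

```python
from dataclasses import dataclass, field

@dataclass
class ChatbotReply:
    text: str
    is_customer_facing: bool
    approved: bool = False

@dataclass
class ReviewQueue:
    """Hold customer-facing AI outputs for human sign-off."""
    pending: list = field(default_factory=list)

    def submit(self, reply: ChatbotReply):
        # Internal drafts pass straight through; customer-facing
        # replies wait in the queue for a human reviewer.
        if not reply.is_customer_facing:
            reply.approved = True
            return reply
        self.pending.append(reply)
        return None

    def approve_next(self) -> ChatbotReply:
        # A human reviewer signs off the oldest pending reply.
        reply = self.pending.pop(0)
        reply.approved = True
        return reply

queue = ReviewQueue()
queue.submit(ChatbotReply("Refund issued.", is_customer_facing=True))
released = queue.approve_next()
print(released.approved)  # True
```

The design choice here is simply that nothing customer-facing leaves the system without a named human in the loop, which also gives you an audit trail.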
Checking Your Compliance Responsibilities
To understand how the Act impacts your business, I recommend using this compliance checker and working with your legal team to ensure you are compliant.
Here are six steps to consider as part of your compliance efforts:
Identify AI Systems: Catalogue all AI systems in use, including general-purpose AI chatbots, and assess their risk classification.
Documentation and Record-Keeping: Maintain detailed records of the AI systems' functioning, data sources, and decision-making processes.
Transparency Measures: Implement mechanisms to ensure transparency in how AI systems operate and make decisions.
Risk Management: Establish procedures for assessing and mitigating risks associated with AI systems.
Human Oversight: Ensure there is adequate human oversight of AI systems to prevent and address any unintended consequences.
Data Protection: Adhere to data protection regulations, ensuring the processing of personal data complies with GDPR and other relevant laws.
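The first step, cataloguing your AI systems with a risk classification, can start as simply as a structured register exported for record-keeping. The field names and example entries below are assumptions for illustration, not a required schema.

```python
import csv
import io
from dataclasses import dataclass, asdict

@dataclass
class AISystemRecord:
    name: str
    vendor: str
    business_use: str
    risk_tier: str          # e.g. "limited", "high" (your legal team's call)
    data_processed: str     # e.g. "customer PII"
    human_oversight: str    # who reviews the system's outputs

# Hypothetical entries illustrating the register.
inventory = [
    AISystemRecord("Support chatbot", "ExampleVendor", "customer service",
                   "limited", "customer PII", "support team lead"),
    AISystemRecord("CV screener", "ExampleVendor", "hiring",
                   "high", "applicant PII", "HR manager"),
]

# Export the register as CSV to support documentation and audits.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(asdict(inventory[0])))
writer.writeheader()
for record in inventory:
    writer.writerow(asdict(record))
print(buf.getvalue().splitlines()[0])
```

Even a lightweight register like this supports steps two through six: it is the documentation, the basis for transparency, and the place where oversight owners are named.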
Book a call with me if you want help to upskill or transform your business with GenAI.