Introduction
As artificial intelligence (AI) rapidly evolves, the need for effective risk management has become a critical concern for both the United States (US) and the European Union (EU). AI presents vast opportunities across various sectors, such as healthcare, finance, defense, and logistics. However, it also poses risks related to privacy, bias, security, and ethical considerations. The US and the EU, as global leaders in technology, have each taken distinct approaches to AI risk management, shaped by their unique regulatory frameworks, political environments, and socio-economic priorities.
This essay explores AI risk management from the perspectives of the US and the EU, focusing on key principles, legislative efforts, and the broader regulatory ecosystems in which they operate. It also examines the balance between fostering innovation and protecting societal values, including privacy, security, fairness, and transparency. By comparing the strategies adopted by these two global powers, we can better understand the strengths and challenges of each approach and how they shape the future of AI governance.
1. The Concept of AI Risk Management
Before delving into the specific approaches taken by the US and the EU, it is important to understand what AI risk management entails. AI systems, by their nature, introduce a spectrum of risks that vary in severity, from potential harm to individuals or society to more abstract concerns about accountability and transparency. Effective risk management in AI involves identifying, assessing, and mitigating these risks through a combination of regulatory frameworks, technological solutions, and ethical guidelines.
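The identify–assess–mitigate cycle described above can be sketched as a simple risk register. The sketch below is purely illustrative: the field names, the 1–5 likelihood/impact scale, and the likelihood-times-impact score are common risk-management conventions assumed here, not drawn from any specific regulatory framework.

```python
from dataclasses import dataclass, field

@dataclass
class AIRisk:
    """One entry in an illustrative AI risk register (assumed structure)."""
    name: str
    category: str          # e.g. "bias", "privacy", "security"
    likelihood: int        # 1 (rare) .. 5 (almost certain) -- assumed scale
    impact: int            # 1 (negligible) .. 5 (severe)   -- assumed scale
    mitigations: list = field(default_factory=list)

    @property
    def score(self) -> int:
        # Classic likelihood x impact scoring; real frameworks use variants.
        return self.likelihood * self.impact

def prioritize(register):
    """Return risks ordered from highest to lowest score."""
    return sorted(register, key=lambda r: r.score, reverse=True)

register = [
    AIRisk("Discriminatory hiring outcomes", "bias", 4, 5,
           ["bias audit", "human review of rejections"]),
    AIRisk("Training-data leakage", "privacy", 2, 4,
           ["data minimization", "access controls"]),
]

for risk in prioritize(register):
    print(risk.name, risk.score)
```

Prioritizing by a single numeric score is a simplification; the point is only that "identifying, assessing, and mitigating" maps naturally onto a structured inventory that can be ranked and tracked.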
1.1. Types of AI Risks
The risks associated with AI can be broadly classified into several categories:
Bias and Fairness: AI systems often learn from historical data, which can be biased, leading to unfair or discriminatory outcomes, especially in areas like hiring, lending, and law enforcement.
Privacy and Data Protection: Many AI systems, particularly those involved in machine learning, require vast amounts of data, raising concerns about how personal data is collected, stored, and used.
Security: AI can be exploited by malicious actors for purposes like cyberattacks, disinformation, and even autonomous weapons.
Transparency and Accountability: Complex AI models, especially those driven by deep learning, often operate as "black boxes," meaning their decision-making processes are difficult to interpret, raising concerns about accountability when things go wrong.
Autonomy and Control: Autonomous AI systems, such as self-driving cars or drones, introduce risks related to human control and responsibility in critical situations.
Economic and Social Impact: AI is likely to disrupt labor markets by automating jobs, which could lead to unemployment or wage suppression in certain sectors.
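To make the bias concern above concrete, the toy example below computes a demographic parity gap: the difference in positive-outcome (e.g. hiring) rates between two groups. The records and group labels are invented for illustration; real fairness audits use far richer metrics and real data.

```python
# Toy hiring data: (group, hired) pairs; entirely invented for illustration.
records = [
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),   # group A: 3 of 4 hired
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),   # group B: 1 of 4 hired
]

def selection_rate(records, group):
    """Fraction of the given group that received the positive outcome."""
    outcomes = [hired for g, hired in records if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = selection_rate(records, "A")
rate_b = selection_rate(records, "B")
gap = abs(rate_a - rate_b)   # demographic parity difference

print(f"group A rate: {rate_a:.2f}")          # 0.75
print(f"group B rate: {rate_b:.2f}")          # 0.25
print(f"demographic parity gap: {gap:.2f}")   # 0.50
```

A gap this large would flag the system for scrutiny, though a low gap alone does not establish fairness: other criteria (equalized odds, calibration) can conflict with demographic parity.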
1.2. Key Principles of AI Risk Management
To manage these risks effectively, a set of guiding principles has emerged across governments and standards bodies, which the material below surveys from the US and EU perspectives.
Slide Content
EU and U.S. AI Risk Management: A Comparison and Implications for the TTC
Alex C. Engler
Mastodon / Twitter: @alexcengler
Email: [email protected]
AI Risk Management Subfields and Examples
AI for Human Processes / Socioeconomic Decisions: AI in hiring, educational access, and financial services approval
AI in Consumer Products: Medical devices, partially autonomous vehicles
Chatbots: Sales or customer service chatbots on commerce websites
Social Media Recommender & Moderation Systems: Newsfeeds on TikTok, Twitter, Facebook, Instagram, and LinkedIn
Algorithms on Ecommerce Platforms: Algorithms for search or recommendation of products and vendors on Amazon or Shopify
Foundation Models / Generative AI: Stability AI's Stable Diffusion and OpenAI's GPT-3
Facial Recognition: Clearview AI, PimEyes, Amazon Rekognition
Targeted Advertising: Algorithmically targeted advertising on websites and phone applications
U.S. AI Risk Management
E.O. 13859 (Maintaining American Leadership in AI): Mandated agency regulatory plans, but these were ignored by all agencies except HHS
AI Bill of Rights: Non-binding; it even "does not constitute government policy"
NIST AI Risk Management Framework: Voluntary suggestions and guidance (official release is tomorrow)
All three share a common set of AI principles: accuracy, safety, fairness, transparency, and accountability, with mentions of explainability and privacy
U.S. AI Risk Management – Binding Agency Guidance
EEOC: Requires transparency, non-discrimination, and human oversight in AI hiring processes for people with disabilities
CFPB: Requires explanations for adverse actions (rejections/denials) by AI models in credit decisions
FTC: Can enforce some data privacy, truth-in-advertising, and commercial surveillance restrictions
HUD: Tackling discrimination in property appraisal models
EU AI Risk Management – EU AI Act
AI for Human Processes / Socioeconomic Decisions: High-risk AI applications listed in Annex III of the EU AI Act will need to meet quality standards, implement a risk management system, and undergo a conformity assessment
AI in Consumer Products: The EU AI Act considers AI implemented within products already regulated under EU law to be high risk; standards must be incorporated into the existing regulatory process
Chatbots: The EU AI Act will require disclosure that a chatbot is an AI (i.e., not a human)
Facial Recognition: The EU AI Act will include some restrictions on remote facial recognition / biometric identification; EU data protection authorities have already fined facial recognition companies under the GDPR
Foundation Models / Generative AI: Draft proposals of the EU AI Act consider quality and risk management requirements
EU AI Risk Management – Other Policy Developments
Social Media Recommender & Moderation Systems: The EU Digital Services Act creates transparency requirements for these AI systems and enables independent research and analysis
Algorithms on Ecommerce Platforms: The EU Digital Markets Act will restrict self-preferencing algorithms in digital markets; individual antitrust actions (see the Amazon case, or Google Shopping) aim to reduce self-preferencing in ecommerce algorithms and platform design
Targeted Advertising: GDPR enforcement, including EDPB fines against Meta for using personal user data for behavioral ads. The Digital Services Act bans targeted advertising to children and certain types of profiling (e.g., by sexual orientation); it also requires explanations of targeted ads and gives users control over what ads they see
Emerging Challenges
AI in Consumer Products / Socioeconomic Decisions: EU standards bodies will have to simultaneously write standards for a variegated set of AI applications, potentially in private. U.S. and EU alignment on a "risk-based approach" does not resolve the mismatch between U.S. agency authority and the broad scope of the AI Act.
AI in Platforms/Websites: The EU has passed legislation with significant implications for AI in social media, ecommerce, and online platforms generally, while the U.S. does not yet appear prepared to do so. More platforms mean more crossing of international borders, including emerging platforms in education, finance, and healthcare, as well as business management software.
TTC Developments – Three Workstreams in AI Risk
TTC Joint Roadmap on Evaluation and Measurement Tools for Trustworthy AI and Risk Management
Pilot project on Privacy-Enhancing Technologies (PETs)
A report on the impact of AI on the workforce, co-written by the European Commission and the White House Council of Economic Advisers
Policy Recommendations
The U.S. should enforce E.O. 13859, creating agency AI regulatory plans.
The EU should create more flexibility in its AI definition, enabling adjustments to what is included.
The EU should take steps to open its standards-setting process to the world, especially with respect to AI with greater extraterritorial impact.
The U.S. and EU should collaborate on AI regulatory capacity (best practices, talent exchange, AI sandbox pilots, etc.).