Navigating the New EU AI Act (Data & Analytics)
MirkoPeters
Aug 09, 2024
About This Presentation
This presentation provides an overview of the EU AI Act and its implications for businesses deploying AI technologies.
Key Aspects of the EU AI Act:
Risk Categorization:
Unacceptable Risk: Prohibited practices include social scoring, subliminal manipulation, and untargeted biometric data scraping.
High Risk: AI systems used in critical areas such as recruitment, medical devices, and biometric identification must comply with strict requirements, including conformity assessments, documentation, and registration in an EU database.
Transparency Risk: AI systems such as chatbots and deep fakes must meet transparency obligations, informing users that they are interacting with AI.
Minimal or No Risk: These systems are generally permitted, with voluntary codes of conduct possible.
Compliance for High-Risk AI Systems:
Providers must ensure high standards in data quality, transparency, human oversight, and cybersecurity. They are also required to undergo conformity assessments, manage risks effectively, and register their systems in an EU database. Compliance is essential to mitigate risks to users and affected individuals.
Obligations for Non-High-Risk AI Systems:
Even AI systems not categorized as high-risk have transparency obligations. Users must be informed when interacting with AI, and generative AI content should be clearly labeled as artificially generated.
Special Rules for General Purpose AI Models (GPAI):
GPAI models, especially those with systemic risks, face additional requirements like comprehensive documentation, risk assessments, and transparency obligations. Free and open-source models are generally exempt unless they pose systemic risks.
Governance and Enforcement:
The EU AI Act establishes a governance structure that includes national authorities, the European AI Office, and the European Artificial Intelligence Board. These bodies oversee the application of the Act, enforce compliance, and conduct market surveillance.
Impact on Fundamental Rights:
The Act mandates assessments on how high-risk AI systems may impact fundamental rights. These assessments evaluate potential harms and outline necessary human oversight and risk mitigation measures.
Data Governance and Ethical AI:
Strong data governance is a cornerstone of the EU AI Act. It emphasizes data quality, bias detection, transparency, and adherence to data protection laws. Organizations must implement sound data management practices and foster a culture of accountability to ensure compliance.
Conclusion:
The EU AI Act introduces a significant regulatory shift for AI in the European Union. By categorizing AI systems by risk, it aims to promote trustworthy AI and protect fundamental rights. Businesses must align their AI systems with these regulations to ensure they are compliant, ethical, and transparent. Proactive measures are crucial for avoiding penalties and maintaining a positive reputation in the evolving AI landscape. The phased implementation approach allows businesses time to adapt, but early action is essential for long-term success.
Slide Content
EU AI Act
Effective Data Leadership
August 2024
EU AI Act – follows a risk-based approach*
• Unacceptable risk (e.g. social scoring, untargeted scraping): prohibited.
• High risk (e.g. recruitment, medical devices): permitted subject to compliance with AI requirements and an ex-ante conformity assessment.
• 'Transparency' risk ('impersonation' such as chatbots, deep fakes): permitted but subject to information/transparency obligations.
• Minimal or no risk: permitted with no restrictions; voluntary codes of conduct possible.
*The categories are not mutually exclusive. (An illustrative sketch of this taxonomy follows.)
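As an illustration of this taxonomy only (the Act itself, not code, determines categorisation), here is a minimal Python sketch of the four tiers as a rule-based lookup. The function and keyword map are hypothetical, and, as the slide notes, the categories are not mutually exclusive.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "permitted subject to requirements and conformity assessment"
    TRANSPARENCY = "permitted subject to transparency obligations"
    MINIMAL = "permitted; voluntary codes of conduct possible"

# Hypothetical keyword map for illustration; real categorisation follows
# Articles 5 and 6, Annexes I and III, and Article 50 of the Act.
PRACTICE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "untargeted_facial_scraping": RiskTier.UNACCEPTABLE,
    "recruitment_screening": RiskTier.HIGH,
    "medical_device_component": RiskTier.HIGH,
    "chatbot": RiskTier.TRANSPARENCY,
    "deep_fake_generation": RiskTier.TRANSPARENCY,
}

def classify(use_case: str) -> RiskTier:
    """Look up the risk tier for a named use case; default to minimal risk."""
    return PRACTICE_TIERS.get(use_case, RiskTier.MINIMAL)

print(classify("recruitment_screening").value)
# -> permitted subject to requirements and conformity assessment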
EU AI Act – A restricted set of particularly harmful AI practices is prohibited
Unacceptable risk:
• Subliminal, manipulative techniques or exploitation of vulnerabilities to manipulate people in harmful ways
• Social scoring for public and private purposes leading to detrimental or unfavourable treatment
• Biometric categorisation to deduce or infer race, political opinions, religious or philosophical beliefs or sexual orientation, with exceptions for labelling in the area of law enforcement
• Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, with narrow exceptions and with prior authorisation by a judicial or independent administrative authority
• Individual predictive policing: assessing or predicting the risk of a natural person committing a criminal offence based solely on profiling, without objective facts
• Emotion recognition in the workplace and education institutions, unless for medical or safety reasons
• Untargeted scraping of the internet or CCTV for facial images to build up or expand biometric databases
EU AI Act – High-risk AI systems must comply with specific rules

Obligations for providers of high-risk AI systems:
• Trustworthy AI requirements such as data quality, documentation and traceability, transparency, human oversight, accuracy, cybersecurity and robustness
• Conformity assessment before placing the AI system on the market, to demonstrate compliance
• Quality and risk management systems to minimise risks for users and affected persons and to ensure compliance
• Registration in an EU database
These obligations will be subject to enforcement to ensure that the high risk is effectively addressed.

High-risk use cases are defined in Annex I (embedded AI) and Annex III. Some examples from Annex III:
• Certain critical infrastructures, such as road traffic and the supply of water, gas, heating and electricity
• Education and vocational training, e.g. to evaluate learning outcomes
• Employment and workers management, e.g. to analyse job applications or evaluate candidates
• Access to essential private and public services and benefits, e.g. credit scoring
• Remote biometric identification, categorisation and emotion recognition; law enforcement; border management; administration of justice and democratic processes
EU AI Act – High-risk AI systems will have to comply with specific rules

1. High-risk AI systems embedded in products covered by Annex I
An AI system shall be considered high-risk where both of the following conditions are fulfilled:
• the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation legislation listed in Annex I; and
• the product whose safety component pursuant to point (a) is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment with a view to the placing on the market or the putting into service of that product pursuant to the Union harmonisation legislation listed in Annex I.
EU AI Act – High-risk AI systems will have to comply with specific rules

2. High-risk (stand-alone) use cases listed in Annex III
• Biometrics: remote biometric identification, categorisation, emotion recognition
• Critical infrastructures: e.g. safety components of digital infrastructure, road traffic
• Education: e.g. to evaluate learning outcomes, assign students to educational institutions
• Employment: e.g. to analyse job applications or evaluate candidates, promote or fire workers
• Essential private and public services: determining eligibility for essential public benefits and services, credit scoring and creditworthiness assessment, risk assessment and pricing in health and life insurance
• Law enforcement
• Border management
• Administration of justice

Filter mechanism: excludes systems from the high-risk list that
• perform narrow procedural tasks;
• improve the result of previous human activities;
• do not influence human decisions; or
• do purely preparatory tasks.
N.B. Profiling of natural persons is always high-risk.
EU AI Act – Obligations of providers and deployers of high-risk AI systems

Provider obligations:
• Risk management system to minimise risks for deployers and affected persons
• Trustworthy AI requirements: data quality and management, documentation and traceability, transparency and information to deployers, human oversight, accuracy, cybersecurity and robustness
• Conformity assessment to demonstrate compliance prior to placing on the market
• Quality management system
• Registration of stand-alone AI systems (those listed in Annex III) in the EU database
• Post-market monitoring and reporting of serious incidents
• Non-EU providers must appoint authorised representatives in the EU

Deployer obligations:
• Operate the high-risk AI system in accordance with its instructions for use
• Ensure human oversight: persons assigned must have the necessary competence, training and authority; monitor for possible risks and report problems and any serious incident to the provider or distributor
• Public authorities must register the use in the EU database
• Inform affected workers and their representatives
• Inform people subjected to decisions taken or informed by a high-risk AI system and, upon request, provide them with an explanation
EU AI Act – Rules for AI systems which are not high-risk

Transparency requirements for certain AI systems (Art. 50):
• Notify humans that they are interacting with an AI system, unless this is evident
• Design generative AI so that synthetic audio, image, video or text content is marked in a machine-readable format and detectable as artificially generated (an illustrative marking sketch follows this slide)
• Deployers must label as artificially generated:
  • deep fakes (unauthentic audio, image or video content)
  • text, if published with the purpose of informing the public on matters of public interest
• Notify humans when emotion recognition or biometric categorisation systems are applied to them

Possible voluntary codes of conduct (Art. 95):
• No mandatory obligations, but the EU AI Act requirements may be applied voluntarily to non-high-risk systems
• Other requirements (e.g. environmental and social sustainability) may also be applied voluntarily
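The Act does not prescribe a particular marking technology. As one hedged illustration of the machine-readable marking requirement, this sketch embeds an "artificially generated" flag in a PNG via Pillow's text-chunk metadata; the key name ai_generated is an assumption, and real deployments would more likely adopt an agreed standard such as C2PA/Content Credentials or IPTC's DigitalSourceType.

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_ai_marker(img: Image.Image, path: str) -> None:
    """Save a PNG carrying a machine-readable 'artificially generated' flag."""
    meta = PngInfo()
    meta.add_text("ai_generated", "true")  # hypothetical key, not a standard
    img.save(path, pnginfo=meta)

def is_marked_ai(path: str) -> bool:
    """Detect the flag when reading the file back."""
    return Image.open(path).text.get("ai_generated") == "true"

save_with_ai_marker(Image.new("RGB", (64, 64)), "synthetic.png")
print(is_marked_ai("synthetic.png"))  # True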
EU AI Act – New special rules for General Purpose AI Models (GPAI-M)

All GPAI models (lower tier):
• Information and documentation requirements, mainly to achieve transparency for downstream providers
• A policy to respect copyright and a summary of the content used for training purposes
• Free and open-source models are exempted from the transparency requirements when they do not carry systemic risks, except for the copyright-related obligations

GPAI models with systemic risks (higher tier):
• Threshold: at least 10^25 FLOPs of training compute, or designation by the AI Office (e.g. based on user count); a threshold arithmetic sketch follows this slide
• All obligations from the lower tier, plus state-of-the-art model evaluations (including red teaming / adversarial testing), risk assessment and mitigation, incident reporting, cybersecurity and additional documentation
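To give the 10^25 FLOP threshold some intuition, here is a back-of-the-envelope sketch using the common approximation of roughly 6 x parameters x training tokens for dense transformer training compute. The heuristic and the example model size are assumptions for illustration; the Act defines only the threshold itself.

# Rough training-compute estimate: FLOPs ~= 6 * parameters * training tokens
# (a widely used heuristic for dense transformers, not defined by the Act).
SYSTEMIC_RISK_THRESHOLD = 10**25  # FLOPs, per the AI Act's presumption

def training_flops(n_params: float, n_tokens: float) -> float:
    return 6.0 * n_params * n_tokens

# Hypothetical model: 70 billion parameters trained on 15 trillion tokens.
flops = training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs; presumed systemic risk: {flops >= SYSTEMIC_RISK_THRESHOLD}")
# -> 6.30e+24 FLOPs; presumed systemic risk: False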
EU AI Act – A comprehensive governance structure for effective enforcement

National competent authorities:
• Supervising the application and implementation regarding high-risk conformity and prohibitions
• Carrying out market surveillance (the EDPS for Union entities)

European AI Office (established within the Commission):
• Developing Union expertise and capabilities in the field of artificial intelligence; acts as implementation body
• Enforcing and supervising the new rules for GPAI models, including evaluations and requesting measures

European Artificial Intelligence Board:
• High-level representatives of each Member State, advising and assisting the Commission and the Member States

Advisory Forum:
• Balanced selection of stakeholders, including industry, SMEs, civil society and academia
• Advising and providing technical expertise

Scientific Panel:
• Pool of independent experts
• Supporting the implementation and enforcement as regards GPAI models, with access by Member States
EU AI Act – The impact on fundamental rights has to be assessed

The use of a high-risk AI system may impact fundamental rights after deployment. Prior to first use, some deployers must carry out a fundamental rights impact assessment for Annex III systems (except critical infrastructure).

The assessment consists of:
➢ The deployer's processes in which the high-risk AI system is intended to be used
➢ The categories of natural persons and groups likely to be affected by its use in the specific context
➢ The specific risks of harm likely to impact the affected categories of persons or groups of persons
➢ A description of the human oversight measures
➢ The measures to be taken in case risks materialise
(A structured-record sketch of these elements follows this slide.)

Deployers concerned:
• Bodies governed by public law
• Private operators providing public services
• Certain other private operators (credit scoring / creditworthiness assessment; risk assessment and pricing in health and life insurance)
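As a minimal sketch only, the assessment elements listed above could be captured in a structured record along the following lines; the class and field names simply mirror the slide and are not an official template.

from dataclasses import dataclass, field

@dataclass
class FundamentalRightsImpactAssessment:
    """Record mirroring the assessment elements listed on this slide."""
    deployer_process: str         # process the system is used in
    affected_groups: list[str]    # categories of persons affected
    risks_of_harm: list[str]      # specific risks to those groups
    oversight_measures: list[str] # human oversight arrangements
    mitigation_measures: list[str] = field(default_factory=list)  # if risks materialise

fria = FundamentalRightsImpactAssessment(
    deployer_process="credit-scoring of loan applicants",
    affected_groups=["loan applicants"],
    risks_of_harm=["discriminatory refusal of credit"],
    oversight_measures=["a case officer reviews every automated refusal"],
)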
EU AI Act – Enters into application via a gradual, phased approach

AI Act entry into force*: 1 August 2024
• 6 months later (2 February 2025): prohibitions on unacceptable-risk systems apply
• 12 months later (2 August 2025): general-purpose AI model rules apply
• 24 months later (2 August 2026): high-risk rules apply (Annex III), together with all other rules of the AI Act
• 36 months later (2 August 2027): high-risk rules apply (Annex I)

*Following its adoption by the European Parliament and the Council, the AI Act enters into force on the twentieth day following its publication in the Official Journal, i.e. on 1 August 2024.
Data governance – derives from the relationship between data & AI strategies
Data quality – a key determinant of trust
End-to-End AI Governance – Ensuring Trustworthy AI
[Diagram] AI delivery sits within a wider governance context: corporate strategy, the EU AI Act, internal policies & procedures, portfolio management, delivery approach, programme oversight, technology roadmap, sourcing and change management.
9-step process: 1) business data & knowledge, 2) product or service design, 3) data extraction, 4) preprocessing, 5) model building, 6) assessment and review, 7) model integration & impact, 8) transition & execution, 9) ongoing monitoring; supported throughout by operational support and compliance & internal audit.
Key phases: 1. Strategy, 2. Planning, 3. Ecosystem, 4. Development, 5. Deployment, 6. Operate & Monitor.
Three Lines Model – Establishing AI Governance for the EU AI Act

Oversight layer: the Ethics Board, third-party assurance providers and the EU AI Act itself sit above the three lines.
• Projects team – creators & executors (1st line)
• Operations – monitoring (1st line): the deployment team tracks decay in performance and maintenance needs (a drift-monitoring sketch follows this slide)
• Managers & supervisors (2nd line)
• Internal audit (3rd line): scheduled reviews of controls and processes
• Ethicists (3rd line): scheduled ethical review of decisions and of alignment with company strategy, culture and mission

Model Development & Data Use: model developers or owners must follow appropriate standards for data preparation, model development, testing, bias checks and maintenance. Use cases must be aligned with the overall organizational strategy.
Risk Assessment: the assessments evaluate the potential risk associated with the model and its use cases, which drives downstream governance requirements.
Process Documentation: the project teams document the model development process using a standard template.
Model Source Data: governance of the data and privacy life cycle, covering citizen developers, third-party tools and custom models.
Standards & Specifications: business leadership defines the acceptance criteria for a model and outlines the overall requirements.
Independent Review & Challenge: an independent review group evaluates the model design, tests model performance, and writes a review memo documenting the testing results, risks and mitigants.
Final Approval: leadership approves the model validation review memo prior to model delivery.
Ongoing Review: scheduled monitoring and sign-off on modifications, replacements or retirement of models, software and data.
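For the first-line monitoring duty above, one common decay signal is the Population Stability Index (PSI) between a feature's training distribution and its live distribution. This sketch is illustrative only: PSI and the 0.2 alert threshold are industry conventions, not requirements of the Act.

import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a reference and a live sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid division by zero in sparse bins.
    e_pct, a_pct = np.clip(e_pct, 1e-6, None), np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train_scores = rng.normal(0.5, 0.1, 10_000)  # reference distribution
live_scores = rng.normal(0.6, 0.1, 10_000)   # drifted live distribution
if psi(train_scores, live_scores) > 0.2:      # common rule-of-thumb threshold
    print("Significant drift: escalate to the provider per deployer duties")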
Effective data leadership under Article 10 – Key responsibilities

1. Data Governance and Management
• Implement robust data governance practices to ensure data quality and integrity.
• Oversee data collection, preparation and processing operations, ensuring they meet the quality criteria specified in the Act.

2. Bias Detection and Mitigation (see the sketch after this slide)
• Conduct thorough assessments to identify and mitigate biases in data sets.
• Ensure that data used for training, validation and testing is representative and, to the best extent possible, free of errors.

3. Transparency and Documentation
• Maintain detailed records of data sources, processing activities and any assumptions made during data preparation.
• Provide clear and comprehensive summaries of the data used for training AI models, especially for general-purpose AI systems.

4. Compliance with Data Protection Laws
• Ensure that all data processing activities comply with relevant data protection regulations, including the GDPR.
• Implement safeguards for processing special categories of personal data, ensuring privacy and security.
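As a minimal sketch of the bias check described under responsibility 2, the following compares positive-outcome rates across groups in a training set with pandas. The column names and the four-fifths (0.8) benchmark are illustrative assumptions, not Article 10 requirements.

import pandas as pd

# Hypothetical training data with a protected attribute and an outcome label.
df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B", "B"],
    "label": [1, 1, 0, 1, 0, 0, 0, 0],
})

rates = df.groupby("group")["label"].mean()  # positive rate per group
ratio = rates.min() / rates.max()            # disparate-impact ratio
print(rates.to_dict(), f"ratio={ratio:.2f}")
if ratio < 0.8:  # illustrative 'four-fifths' rule of thumb
    print("Potential bias: review sampling, then reweight or relabel as needed")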
Effective data leadership under Article 10 – Best practices

1. Establish Clear Data Policies
• Develop and enforce data policies that align with the EU AI Act's requirements.
• Regularly review and update these policies to reflect changes in legislation and best practices.

2. Foster a Culture of Accountability
• Promote a culture of accountability within the organization, ensuring that all team members understand their roles in data governance.
• Provide training and resources to support compliance efforts.

3. Leverage Technology for Compliance (a data-quality sketch follows this list)
• Utilize advanced data management tools to automate compliance checks and ensure data quality.
• Implement AI-driven solutions for real-time monitoring and reporting of data governance activities.

4. Engage with Stakeholders
• Collaborate with internal and external stakeholders, including data protection authorities, to ensure compliance and address any concerns.
• Participate in industry forums and initiatives to stay informed about emerging trends and regulatory updates.
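As a minimal sketch of the automated compliance checks named in practice 3, the following runs basic data-quality checks with pandas; the checks and the example data are illustrative assumptions, not a prescribed toolset.

import pandas as pd

def data_quality_report(df: pd.DataFrame) -> dict:
    """Basic automated quality checks of the kind practice 3 describes."""
    return {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "null_counts": df.isna().sum().to_dict(),
    }

# Hypothetical dataset for illustration.
df = pd.DataFrame({"age": [34, None, 29, 29], "income": [52_000, 48_000, None, None]})
print(data_quality_report(df))
# -> {'rows': 4, 'duplicate_rows': 1, 'null_counts': {'age': 1, 'income': 2}}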
Thank you!
Email: [email protected]
Phone: +44 (0)7535 994 132
Website: https://www.ai-and-partners.com/
LinkedIn: https://www.linkedin.com/company/ai-&-partners/
Twitter: https://twitter.com/AI_and_Partners