TrustArc Webinar - The Future of Third-Party Privacy Risk: Trends, Tactics & Executive Insights

TrustArc · 20 slides · Oct 09, 2025

About This Presentation

As organizations continue to rely on third parties to deliver critical services, the data privacy risks tied to vendor relationships are growing more complex and higher-stakes. From regulatory compliance to reputational risk, third-party oversight is no longer a checkbox activity; it's a strategic necessity.


Slide Content

© 2025 TrustArc Inc. Proprietary and Confidential Information.
The Future of Third-Party Privacy Risk:
Trends, Tactics & Executive Insights

2
Legal Disclaimer
The information provided during this webinar does
not, and is not intended to, constitute legal advice.
Instead, all information, content, and materials presented during
this webinar are for general informational purposes only.

3
Speakers
Janalyn Schreiber
Senior Privacy Consultant, TrustArc

Dareus Robinson
Product Counsel, Snapchat

Agenda

1. The Stakes: Why TPRM Matters
2. The Real Cost of Failure
3. The AI Factor:
   ○ New Dimensions of Privacy & Vendor Risk
   ○ New Dimensions of Privacy Compliance Obligations
4. How To: The Vendor Risk Lifecycle
5. Planning & Sourcing
6. Deeper Due Diligence
7. Risk Tiering
8. Monitoring & Change Management
9. Onboarding & Offboarding
10. AI in the Lifecycle
11. Partnering with Information Security
12. Educating Business Partners on Responsible AI Vendor Selection
13. Key Takeaways

5
The Stakes: Why Third-Party Risk Management Matters
● Increased reliance on vendors (SaaS providers, AI-enabled services) expands the attack surface and privacy exposure.
● 35% of breaches in 2024 were tied to third parties*
● AI expands risk: opaque models, hidden data use, hallucinations
● C-Suite and regulators expect continuous oversight

Vendor oversight has moved from "nice to have" to strategic necessity.



* SecurityScorecard's 2024 Global Third-Party Cybersecurity Breach Report

6
The Stakes: Why Third-Party Risk Management Matters
● Privacy laws explicitly reinforce this accountability:
● GDPR: Controllers must ensure processors provide sufficient guarantees; joint liability is possible.
● CCPA/CPRA: Service providers and contractors must have contracts limiting data use and enabling oversight.
● Other U.S. State Laws (e.g., Colorado, Virginia, Connecticut): Mirror GDPR-like obligations for contracts and risk management.

Translation: Regulators and customers don't care if it was "your vendor's fault."

7
The Real Cost of Failure
Regulatory Fines:
● GDPR fines for insufficient vendor oversight (e.g., relying on risky processors).
● CPRA penalties for failing to have proper service provider contracts in place.
● EU AI Act will impose obligations on users of high-risk AI systems, not just vendors.

Reputational Damage: Breach notifications and press coverage damage trust, often permanently.

Remediation Costs: Breach response, credit monitoring, class actions, and OCR/state AG investigations.
● The MOVEit third-party vulnerability impacted over 2,700 organizations worldwide, costing millions in downstream breach notifications.
● Even with contracts, these organizations faced the headlines and lawsuits.

Bottom Line: Third-party failures can result in both privacy and security fallout, and both are expensive.

8
The AI Factor: New Dimensions of Privacy & Vendor Risk


AI-Driven Privacy Risks:
• Vendors may use your data for model training without consent (GDPR and CPRA both restrict this).
• Lack of transparency into AI models creates challenges for accountability.
• Data minimization and purpose limitation can be undermined by AI "function creep."
• New risks: hidden data use, hallucinations, adversarial input, opaque models, lack of explainability, etc.

From Checkbox to Continuous:
• Annual vendor questionnaires are no longer enough; regulators expect living oversight.

C-Suite & Board Attention:
• Privacy teams must express risks in business terms, not just compliance jargon.

Pressure for Speed:
• Procurement and business units push for rapid onboarding.

End Result: Privacy teams must balance rigor with agility.

9
The AI Factor: New Dimensions of Privacy Compliance Obligations




EU AI Act Considerations:
• Users of high-risk AI systems must ensure compliance, even if the vendor builds the system.
• Transparency, human oversight, and conformity assessments will be mandatory.

US Lens:
• CPRA and emerging state laws emphasize consumer rights (access, deletion, opt-outs) that AI vendors must respect.
• FTC has used its Section 5 authority to crack down on unfair or deceptive AI claims via Operation AI Comply.

New Reality: Our vendor risk lens must now expand to AI-specific risks.

10
How To: The Vendor Risk Lifecycle
Planning / Strategy → Sourcing / RFP → Due Diligence → Risk Tiering → Contract Negotiation → Ongoing Monitoring → Change Management → Onboarding & Offboarding

11
Planning & Sourcing


Planning / Strategy
● Define acceptable risk thresholds (e.g., what is a "no-go" risk)
● Align risk criteria with laws, industry standards, and board expectations
● Identify categories of vendors (cloud, SaaS, AI, data processors, etc.)
● Decide tiering logic (sensitive data, criticality, AI usage)

Sourcing / RFP
● Require vendor disclosure of AI use and sub-processors up front
● Screen out high-risk vendors without baseline certifications (SOC 2, ISO, etc.)
● Use standard RFP questions on security/privacy/AI governance
● Engage Privacy + InfoSec together to score vendors early


12
Deeper Due Diligence




What to Ask, What to Validate
● Ask for evidence, not just yes/no answers.
● AI-specific disclosures: training data, governance, monitoring, red-teaming.
● Review certifications: SOC 2, ISO, privacy seals.
● Check data flow diagrams, subprocessors, and cross-border transfers.

Contract & Safeguard Requirements
● Breach notification obligations (timebound).
● Right to audit and inspection.
● Subprocessor approval and flow-down clauses.
● Data return, deletion, and segregation requirements.
● AI/ML clauses: training data limits, transparency, oversight.

13
Risk Tiering




● Apply a consistent scoring model (data sensitivity + access + AI + criticality); a minimal scoring sketch follows this list
● Classify into Low / Medium / High tiers
   ○ High Risk: Sensitive data + AI features
   ○ Medium Risk: Sensitive data, no AI
   ○ Low Risk: Non-sensitive data, limited access
● Define required controls by tier
● Tiering determines monitoring frequency and contract terms
● Document rationale for tiering (defensible if audited)
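A minimal sketch of what such a scoring model could look like in code, assuming hypothetical factor weights and cutoffs (the VendorProfile fields, weights, and tier thresholds below are illustrative assumptions, not TrustArc's methodology):

    # Hypothetical tier-scoring sketch; field names, weights, and cutoffs are assumptions.
    from dataclasses import dataclass

    @dataclass
    class VendorProfile:
        handles_sensitive_data: bool   # PII, health, financial, etc.
        access_level: int              # 0 = none, 1 = limited, 2 = broad system access
        uses_ai: bool                  # vendor embeds AI/ML features
        business_critical: bool        # an outage would disrupt core operations

    def score(v: VendorProfile) -> int:
        """Combine the four factors above into a single additive score."""
        return (3 * v.handles_sensitive_data + v.access_level
                + 2 * v.uses_ai + 2 * v.business_critical)

    def tier(v: VendorProfile) -> str:
        """Map a vendor to Low / Medium / High, mirroring the tiers above."""
        if v.handles_sensitive_data and v.uses_ai:
            return "High"    # sensitive data + AI features
        if v.handles_sensitive_data:
            return "Medium"  # sensitive data, no AI
        return "Medium" if score(v) >= 3 else "Low"

    # Example: an AI-enabled SaaS vendor with broad access to customer PII -> "High"
    print(tier(VendorProfile(True, 2, True, True)))

Whatever weights an organization chooses, keeping the inputs and cutoffs in one documented place is what makes the tiering decision defensible if audited.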

14
Monitoring & Change Management






Ongoing Monitoring
● Periodic reassessments by vendor tier (a cadence sketch follows this list)
● Triggered reviews after incidents or AI expansion
● Maintain vendor change log
● Stage rollouts for new features
● Conduct lessons-learned post-mortems

Change Management
● Require advance notice of service or AI-related changes
● Reassess vendor risk if they pivot business models or add new sub-processors
● Update contracts or risk tiering when changes occur
● Pilot or stage rollout of risky new vendor features
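A minimal sketch of how tier-based reassessment cadence and triggered reviews might be encoded, assuming illustrative intervals (quarterly / semi-annual / annual) and trigger-event names; none of these values come from the presentation:

    # Hypothetical monitoring-cadence sketch; intervals and trigger names are assumptions.
    from datetime import date, timedelta

    REASSESSMENT_INTERVAL = {
        "High": timedelta(days=90),     # e.g., quarterly
        "Medium": timedelta(days=180),  # e.g., semi-annual
        "Low": timedelta(days=365),     # e.g., annual
    }

    TRIGGER_EVENTS = {"security_incident", "new_ai_feature",
                      "new_subprocessor", "business_model_change"}

    def review_due(tier, last_review, events, today=None):
        """A review is due when the tier's interval has elapsed or a trigger event occurred."""
        today = today or date.today()
        overdue = (today - last_review) >= REASSESSMENT_INTERVAL[tier]
        return overdue or bool(set(events) & TRIGGER_EVENTS)

    # Example: a Medium-tier vendor adds an AI feature mid-cycle -> review now, not next cycle
    print(review_due("Medium", date(2025, 6, 1), {"new_ai_feature"}, date(2025, 8, 1)))  # True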

15
Onboarding & Offboarding
Onboarding
● Limit access to the least privilege necessary
● Confirm security testing of integrations before go-live

Offboarding
● Confirm data return or certified deletion
● Terminate all access credentials and integrations
● Verify subprocessor offboarding as well

16
AI in the Lifecycle

● Sourcing: Request AI disclosures (training data, governance)
● Due Diligence: Validate evidence (red-teaming, monitoring)
● Contracts: Insert AI transparency and data use clauses
● Monitoring: Trigger reviews when AI features are added
● Offboarding: Confirm AI-related data deletion and model retraining limits

17
Partnering with Information Security




Building the Privacy–InfoSec Partnership:
● Privacy ensures data use is lawful, fair, and rights-respecting.
● InfoSec ensures technical safeguards, monitoring, and resilience.
● Together, they create a full-spectrum vendor risk view.

Practical Actions:
● Integrate AI-specific questions into due diligence and contracts.
● Establish joint vendor review processes (privacy + security).
● Align reporting dashboards so leadership gets a single vendor risk picture.
● Share intelligence (privacy law updates + threat feeds) for proactive risk management.

18
Educating Business Partners on Responsible AI Vendor Selection






Why It Matters
● AI vendors may repurpose data for training, introduce bias, or create opaque risks
● Business partners often prioritize speed over governance; education is key

How to Deliver
● Host short training sessions during procurement cycles
● Embed Privacy + InfoSec into early-stage vendor selection
● Provide reusable templates that make it easy for partners to "ask the right questions"

19
Key Takeaways






● Balance Rigor vs Speed!
● Third-party and AI vendor risk is strategic, not just operational.
● Tier your vendors; don't treat them equally
● Integrate AI clauses into every new contract
● Build joint dashboards with InfoSec, engage the Business
● Trigger reviews when vendors pivot to AI
● Think lifecycle, not point-in-time

20
Thank You!