Building trust in AI starts with strong data governance —
from metadata and stewardship to compliance and ethics.
Top governance priorities in use of Data & AI
Data stewardship – Assigning clear accountability for data assets
Metadata management – Maintaining clear data definitions and lineage
Access control & security – Implementing robust role-based permissions (see the illustrative sketch after this list)
Compliance monitoring – Ensuring alignment with regulations (e.g., Vietnam Decree 13, PDPL, GDPR, PDPA)
Ethical AI guidelines – Embedding fairness, transparency, and accountability in AI use
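To make the access control & security priority concrete, the following is a minimal sketch of role-based permission checks in Python. The roles, resources, and policy table are illustrative assumptions for this example only and are not drawn from the EY analysis.

```python
# Minimal, illustrative role-based access control (RBAC) sketch.
# The roles, resources, and policy table below are hypothetical examples.
from dataclasses import dataclass

# Map each role to the actions it may perform on each resource type.
POLICY = {
    "data_steward": {"customer_records": {"read", "update_metadata"}},
    "analyst": {"customer_records": {"read"}},
    "auditor": {"customer_records": {"read"}, "access_logs": {"read"}},
}

@dataclass
class User:
    name: str
    role: str

def is_allowed(user: User, resource: str, action: str) -> bool:
    """Return True if the user's role grants the requested action on the resource."""
    return action in POLICY.get(user.role, {}).get(resource, set())

if __name__ == "__main__":
    analyst = User(name="lan", role="analyst")
    print(is_allowed(analyst, "customer_records", "read"))             # True
    print(is_allowed(analyst, "customer_records", "update_metadata"))  # False
```

In practice, checks like this would sit behind an identity provider and be paired with audit logging, so that compliance monitoring can verify who accessed which data and when.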
▪Metadata management (25%) and data stewardship (20%) emerge as the top governance priorities, highlighting the critical need for structured, transparent, and accountable data practices.
▪Compliance monitoring (20%) and ethical AI guidelines (18%) are also gaining traction, signaling that organizations are balancing technical governance with legal and ethical safeguards, while access control & security (16%) reinforces the technical backbone for protecting sensitive data.
Key takeaways
Trust and compliance in AI and data do not rest on technology alone but on strong governance foundations.
▪Organizations are moving toward a holistic approach: strengthening metadata and stewardship, ensuring regulatory alignment (e.g., Vietnam Decree 13, Vietnam's new Personal Data Protection Law (PDPL), GDPR, PDPA), and embedding ethical AI principles.
▪Together, these practices show that organizations view governance not just as a compliance exercise but as the foundation for secure, transparent, and responsible AI adoption.
Organizations that lead with governance
▪Understand and mitigate the risks of AI before they materialize
▪Proactively identify high-value areas for AI deployment
▪Generate direct value through clear and consistent innovation
▪Scale learnings and leading practices discovered in early pilots
▪Accelerate ROI by protecting revenue in addition to generating it
▪Adopt AI at scale and increase impact across functions and BUs
▪Have a focused and consistent approach to innovation and scaling
Organizations that do not lead with governance
▪Adopt AI in silos, which limits scale, ROI, and visibility
▪Create prioritization chaos, which results in fragmented investment
▪Reactively address risks, damaging both reputation and finances
▪Repeat mistakes across BUs throughout the AI adoption lifecycle
▪Experience slow-paced innovation, cultural regression, and reduced collaboration
Source: EY, "EY Insights Analysis - Responsible AI Solution Narrative"
By setting up the right governance framework for AI, an organization can amplify the positive outcomes of innovation and mitigate potential risks, especially given the rapid pace of technological change in this field.