
From Guardrails to Growth: How Governance Fuels AI Innovation
By John McLain, EY Americas Technology Risk AI Leader and EY Americas Assurance AI Deputy Leader
As AI becomes integrated into enterprise workflows, executives around the globe continue to invest in using AI responsibly, though many report that adoption is advancing faster than their ability to govern it.
In this complex environment, strong governance — not just speed to market — will emerge as a key differentiator. Organizations poised to lead with AI innovation understand that governance helps make AI secure, reliable and trusted, while demonstrating responsibility to customers and regulators.
Surveys indicate that this mindset is taking hold. In a December 2025 AI Pulse Survey, 68% of senior leaders whose organizations are investing in AI said their emphasis on ethical AI operations will rise in the coming year, up from 60% in the prior survey. Separately, the EY Technology Risk Pulse Survey found that more than three in four organizations are aligning their governance with recognized standards, including the NIST AI Risk Management Framework, and with emerging AI regulation such as the European Union’s Artificial Intelligence Act.

Governance is the building block of responsible AI. It provides the oversight and accountability to help organizations identify AI use cases, manage AI behaviors and intervene quickly when systems drift.
Governance processes should include strategy, operating model, lifecycle phases, policies, an inventory of AI tools, performance monitoring, training and reassessment — and be tailored to the level of risk appropriate for the specific industry, technology and use cases. Success requires a holistic effort and collaboration across executive leadership, risk and compliance, data scientists, legal, internal audit, and business stakeholder teams.
Organizations that struggle to get started with governance often cite concerns about data readiness, unclear ownership and insufficient training. A common gap in practice is the failure to integrate AI governance with enterprise risk management. This can result in AI tools being developed and deployed without clear oversight, duplicative efforts among teams creating similar AI solutions independently, or missed opportunities to scale what works.
Overcoming these challenges is an ongoing effort, and many are advancing. In an EY Global Responsible AI Pulse survey, 80% of respondents said their organization had defined a clear set of responsible AI governance principles.
Despite these intentions and efforts, the same EY Global Responsible AI Pulse survey found that nearly every company polled had already suffered financial losses from AI-related incidents.
That is why governance assessments are crucial at every stage of the AI lifecycle, from design and data sourcing to deployment and ongoing monitoring. Organizations need to document assumptions, evaluate models and the surrounding systems, strengthen data governance practices and accountability structures, monitor model behavior and manage third-party risks. Because AI must perform under pressure, testing should also incorporate adversarial scenarios.
Assurance offers executives reliable evidence that AI risks have been identified, monitored and managed. This empowers teams to move faster, enables executives to make bolder decisions with confidence and builds trust in the capital markets. AI with governance becomes a catalyst for growth and a source of competitive advantage.
The views reflected in this article are those of the author and do not necessarily reflect the views of Ernst & Young LLP or other members of the global EY organization.