Shareholders have AI on the brain. This is clear from a recent Bloomberg News analysis of earnings calls from Nasdaq 100 companies, which found that mentions of AI have increased fivefold since 2021.
The technology carries enormous promise and real peril. Experts point to its ability to supercharge productivity, identify patterns at unprecedented speed and reshape the anatomy of work by embedding automation into jobs across the board. It’s little wonder that investors want clarity on how AI, and generative AI in particular, will disrupt businesses.
As companies adopt AI at speed, regulators are racing to put accountability and governance in place. The EU recently drafted an expansive set of new guardrails for AI, and more governments are likely to follow suit. The dual emphasis that policy makers and investors are placing on AI is putting it front and center on the CFO’s agenda.
“We’re seeing a significant amount of stakeholder discourse and scrutiny around the opportunities and risks associated with artificial intelligence,” says Paul Goodhew, Global Assurance Innovation and Emerging Technology Leader at Ernst & Young Global Limited. “This is why it’s so important that the CFO is fully up to speed with these risks. While AI appears to present almost unlimited use cases and opportunities, organizations need to ensure they have the right risk management framework in place, and the CFO has a central role to play here.”
To mitigate that risk and identify new opportunities, a CFO’s first order of business should be gaining visibility into how AI is being used across the company and identifying which use cases are worth the investment.
AI is only as good as the data it draws on, and CFOs are well placed to understand and leverage that data while keeping an eye on governance and the measured use of resources and capital.
“Any CFO today is sitting at the nexus of where data intersects with what the finance function is doing and is uniquely positioned to look at the potential of AI in accelerating the transformation of their organization,” says Goodhew.
AI’s impact on the business is seemingly limitless.
“AI can help the wider business be more efficient, more accurate and able to process far more data than ever before, with use cases such as the analysis of data to better understand what markets the company can operate in,” says Goodhew.
Many organizations are also using AI to enhance their creative decision-making, particularly in the consumer goods sector, according to Richard Jackson, Global Assurance AI Leader at Ernst & Young LLP.
“Whether it’s branding, new product designs or marketing ideas, you can use AI to look at trends that are evident on social media and translate them into recommendations,” says Jackson.
While there are numerous applications for AI across an enterprise, finance leaders need to carefully weigh the risks of any AI initiative they deploy—or that an employee might deploy, with or without the organization’s knowledge.
“Being surprised that someone key in your organization was using generative AI to produce something that was affecting the financials can have serious negative ramifications on the business. Everything from compliance with regulations and laws to the company’s system of internal controls is at risk. These are the kind of challenges that need to be considered,” says Cale Whittington, EY Americas Assurance Digital Leader at Ernst & Young LLP.
Free online AI tools have all but eliminated the barrier to entry for experimenting with generative AI. While this has proved a boon for innovation, it has also introduced new risks.

For example, there have already been many instances of companies relying on generative AI tools to create and publish content automatically, without proper oversight. Sometimes the output contains “AI hallucinations,” in which inaccurate information is presented as fact.
Stakeholders who aren’t aware that certain content is AI-generated might not know to question it. Alternatively, employees might place blind faith in AI, assuming an infallibility that doesn’t exist.
“It’s important that AI is explainable to users in the organization, so they understand when and how they’re interacting with AI and how they should apply their judgment,” says Goodhew.
Aside from risks incurred by employees, there are also risks inherent in the data used to train AI models. Regulators across the globe are scrambling to address some of AI’s thornier issues, such as bias introduced into AI training models and the threats to privacy inherent in processing large swaths of customer data.
“One of the unique challenges of AI at the moment is that the technology is outpacing regulation and compliance,” says Jackson. “The fact that it’s out there and everyone can use it—the horse is already out of the gate.”
CFOs need to be agile to respond to the constantly evolving opportunities, and risks, that AI technology presents. Unfortunately, early-stage AI exploration and implementation at many companies is scattered and fragmented, with initiatives trialed haphazardly across functions.

This is why Goodhew recommends that companies assess current efforts across their organization to develop and test AI, and put clear policies in place to guide how and when it is acceptable to use AI.
This is something the EY organization recently did. The firm has committed $1 billion to transform its assurance technology, with a significant focus on integrating AI to support its assurance professionals in assessing risk.
“To successfully develop and deploy any AI technology, organizations must prepare. Having the right infrastructure in place, including the appropriate governance, processes and testing capabilities, is critical,” says Goodhew.
The EY organization has been addressing this need as it develops AI-enabled technologies. It uses a comprehensive global technology certification process to govern and manage the development and release of audit technology, including any AI-enabled capabilities. Additionally, the design of EY’s user experience helps EY professionals understand and adhere to usage guidelines.
Finance leaders—like regulators—have a vested interest in managing the short- and long-term risks associated with AI, whether it’s the fallout from a bad AI output or the potential disruption to a company’s business model.
Although evolving quickly, AI is still in its early stages, which means that organizations are in a prime position to start building out the procedures and policies that will govern adoption. This is something that regulators are starting to demand.

Because there are so many unknowns around AI, it’s important that C-suites build a culture of professional curiosity and skepticism around AI initiatives.
“The landscape is shifting around you, so you need to make sure the organization is in dynamic risk-assessment mode,” says Whittington, adding that leaders, first and foremost, need to build a risk management framework that empowers internal stakeholders to continuously challenge the outcomes of AI.
Leaders should be asking questions about how and where data is being used, how the algorithms that crunch the data work, where bias might be introduced into the data and whether the outcomes from AI are reliable.
While regulators may be slow to draft legislation around AI usage, the market is stepping in to demand transparency from organizations about how they deploy the technology.
“It’s going to take an awful lot of time for policy makers to write laws at an appropriate level of detail that are globally consistent. But in the near term, we’re seeing real demand from consumers and businesses,” says Jackson.
Consumers want a clearer picture of when they’re interacting with AI, which decisions that affect them are being made by a machine and how those decisions are made. For example, many companies use AI for credit assessment, and some consumers might want insight into why an application was turned down. These stakeholder expectations may, in turn, lead corporations to demand new types of services that provide confidence in the responsible use of AI.
“At the EY organization, we see huge opportunities for auditors to step in and help in that space, whether by providing assurance to management teams and their boards, or by being that third party that provides confidence around AI in the marketplace,” says Jackson.
While the technical nuances of AI lie outside the wheelhouse of most boards, calculating risk and potential business impact does not.

When considering where to start, boards should approach AI as they would any other issue, beginning with an assessment of how it impacts different parts of the business. As for the areas that Jackson admits are a “black box,” such as how AI algorithms work and how organizations protect data and privacy, leadership needs to press technology providers for the full picture, particularly when dealing with sensitive financial information.
“My urge to CFOs would be to start looking at vendor relationships very carefully, and don’t become complacent on your service organization’s controls,” says Whittington, referring to reports that assess the risks of using a given third-party service provider.
Boards, meanwhile, should hold management accountable for the organization’s AI vendor relationships.
“Boards should be asking questions like, how has the organization identified which technology partners to work with?” says Jackson. “And what are the risks of working with some of those vendors?”
Creating the building blocks of a successful AI strategy—one that anticipates impending risks and regulations—can be a daunting prospect for C-suites and boards alike, particularly because so much about AI’s impacts remains unknown.
But for organizations that take the time to ensure that their AI adoption is both focused and measured, the potential benefits are vast.
“Yes, there is a risk element, but there’s also an upside,” says Goodhew. “Imagine how AI will help organizations offer new types of products and services in the future that maybe they can’t today. It’s important that we emphasize to the CFO and wider organization that these opportunities are associated with AI.”
The views reflected in this article are the views of the author(s) and do not necessarily reflect the views of Ernst & Young LLP or other members of the global EY organization.