In just a few short years, AI has gone from being the purview of a select group of tech leaders to becoming nearly ubiquitous across finance teams. According to KPMG’s 2023 AI in Financial Reporting survey, 65% of organizations are already using AI in some aspects of their financial reporting, and 71% expect AI to become a core part of their reporting function within the next three years.
The technology comes with a lot of promise and has been a big driver of innovation in the finance function. Using AI, finance teams can combine data sets—like pricing for point-of-sale units, transaction data, payment methods and customer demographics—to improve product offerings or create more dynamic predictive models.
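The kind of data combination described above can be sketched in a few lines. This is a minimal, illustrative example only: the column names, sample values and the spend-by-age aggregation are assumptions standing in for whatever point-of-sale and demographic data a finance team actually holds.

```python
import pandas as pd

# Hypothetical sample data: transactions and customer demographics
# (column names and values are illustrative, not from any real system).
transactions = pd.DataFrame({
    "customer_id": [1, 2, 1, 3],
    "amount": [120.0, 45.5, 80.0, 210.0],
    "payment_method": ["card", "cash", "card", "card"],
})
demographics = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "age_band": ["25-34", "35-44", "45-54"],
})

# Join the two data sets on the shared customer key, then
# aggregate spend per age band as a simple predictive-model input.
combined = transactions.merge(demographics, on="customer_id", how="left")
spend_by_age = combined.groupby("age_band")["amount"].sum()
```

A feature table like `spend_by_age` is the sort of intermediate output a team might feed into a forecasting or product-mix model.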
Leveraging AI’s ability to comb through large swaths of public data, CFOs can gain more precise market insights and competitive intelligence. And by using AI to analyze data anomalies, finance teams are also better positioned to spot fraud.
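The anomaly-spotting idea can be illustrated with the simplest possible rule: flag any value that sits unusually far from the mean. Real fraud-detection systems are far more sophisticated; the function name, toy amounts and the z-score threshold below are all assumptions for the sketch (with a small sample, a threshold of 2 standard deviations is used so the outlier registers).

```python
import statistics

def flag_anomalies(values, threshold=2.0):
    """Flag values more than `threshold` sample standard
    deviations away from the mean of the data."""
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if abs(v - mean) / stdev > threshold]

# Toy transaction amounts with one value that might warrant review.
amounts = [102.0, 98.5, 101.2, 99.8, 100.4, 5000.0, 97.9, 103.1]
suspicious = flag_anomalies(amounts)
```

Here `suspicious` contains only the 5000.0 transaction, which a reviewer would then examine, the same way an analyst would follow up on an unexpected ledger entry.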
“AI is like having a deeper data scientist right next to you as you work, which I think is very intriguing to finance leaders right now,” says Matthew Johnson, Audit Technology Assurance Leader at KPMG.
While business leaders are eager to explore the different capabilities that AI—and generative AI in particular—can bring to their organizations, many are taking a slow and steady approach to adoption. According to KPMG, 37% of finance leaders are still in the planning stages of their generative AI journeys.
Handing over decision-making to a machine is no small undertaking. Any number of issues—from biased data to algorithmic errors—can result in the technology making mistakes that can affect a company’s analysis, revenue, forecasts or even its reputation. But for business leaders who make the effort to put the right controls in place around the technology, the benefits can outweigh the risks.
Understanding AI’s inner workings
Deploying AI at scale means having trust in the decisions it makes—a leap of faith that is nearly impossible if business leaders don’t understand the inner workings of their AI projects.
According to Ed Moran, Head of Audit Innovation at KPMG, ensuring the explainability of AI decisions goes a long way toward trusting the technology to make the right business decisions. Because the technology is becoming more accessible and learns as it goes, he anticipates that data collected over time will become more representative, which will lead to more responsible decisions being made by “machines.”
Stripping bias from data and making AI more trustworthy are vital to ensure that the technology is deployed responsibly—an issue of increasing concern among both regulators and the general public.
“This whole responsible-AI issue will come to the forefront relatively quickly,” says Moran. “And if it’s not top of mind for business leaders now—and it should be—it will be soon.”
For leaders to understand how AI works and how it makes decisions, Moran emphasizes the importance of bringing multiple stakeholders from different business functions into the conversation early in the process.
“Not only should IT and data science have seats at the table, but so should legal, compliance and public affairs, because this is an emerging area for them, too,” he says.
Holding AI accountable
As more companies recognize the need for an explainable AI model, the appetite for third-party reviews of the AI environment is growing. According to KPMG’s survey, 65% of business leaders would like their external auditors to help them evaluate their organizations’ use of AI, and 51% of respondents think third-party attestation will be valuable.
Auditors are heeding the call, with many implementing advanced AI technologies to boost their auditing capabilities. KPMG, for example, recently made a $2 billion commitment to integrate next-level AI into its day-to-day business. Indeed, 72% of survey respondents believed their external auditors were ahead of their own organizations in applying AI.
Assurance leaders are also building out frameworks to help organizations adapt to the unique challenges brought about by AI. An important aspect of these frameworks is creating the governance and oversight mechanisms that ensure accountability for AI outputs.
According to Johnson, the responsibility for every AI decision shouldn’t rest solely on the shoulders of management; everyone in the organization should be versed in how their AI arrives at certain decisions, and they should be empowered to check the work of the machine, just as they would the work of a human colleague.
“AI is so literate that when you get an answer back, you might think, ‘That sounds pretty good. I’m going to trust the output.’ But employees have to understand that these things are not always accurate; AI should not be thought of as a truth machine,” says Johnson. “Instead, they should think of AI as a smart assistant and apply the same controls, reviews and professional skepticism that they would with a human colleague.”
Upskilling for AI
However, employees are not always given the training they need to check the work that AI is doing. According to a KPMG survey, only 12% of US finance executives think their workforce is adept at adopting generative AI.
As AI becomes a core component across more aspects of business, it will demand new types of employee skills, just as other transformational digital technologies—like cloud computing—required upskilling. Employees will need training to get up to speed.
Despite the deeply rooted fear that AI could one day make humans obsolete in the workplace, survey responses suggest that the opposite might be true. According to KPMG’s survey, 55% of executives believe that AI will help grow their teams and augment their skills.
While there may be a shortage of educators experienced in the technology, Moran says that AI itself is uniquely positioned to teach employees about its inner workings and can use its vast data-processing capabilities to create lesson plans tailored to each individual. He equates this to being educated in a small-group session, with lots of individualized attention—an ideal learning environment that is difficult for corporations and even institutions of higher education to achieve.
“AI can actually help you get the requisite skills, any time of day or night,” says Moran. “This could be the best tutor that was ever invented.”
Next steps companies can take
Business leaders know that to stay competitive, they need AI. What some might not realize is that transformation doesn’t need to happen all at once. They should start by mapping out how AI is already being used across the organization. This first step will allow finance leaders to identify and prioritize where to implement AI. It’s also okay for organizations to test the waters and create a sandbox or “walled garden” environment that gives space for employees to experiment with the technology without putting the company at risk.
To ensure employees know how to use AI safely, finance leaders should create a framework that guides responsible engagement with the technology and communicate that guidance throughout the organization. Leaders also need to audit AI usage regularly to confirm the technology is being used in compliance with company policy.
Companies also need to make AI a core component in their talent management strategy. Making AI training a part of onboarding not only helps protect companies from the risks associated with the technology, it also makes them more attractive places to work. According to KPMG, half of business leaders think the use of AI will draw more talent (i.e., a more diverse set of skills) to financial reporting.
Johnson says he also thinks AI has the potential to bring more inclusivity to recruitment by identifying areas of bias in the hiring structure.
“With AI, you’ve got this analytic capability to make connections that are much more complex and multifaceted than ever before,” he says. “It’s correlation analysis on steroids, and that is extremely promising in the talent management space. So when we think about the potential that lies ahead with AI, there is a lot to be excited about.”
No matter where a company finds itself on its AI journey, adoption of the technology is reaching critical mass, and there is a competitive imperative for organizations to prioritize their AI initiatives. If business leaders lay the groundwork for responsible AI use now, the value they extract from the technology will last long into the future.