AUDIT & ASSURANCE
A New Frontier: CPAs as AI System Evaluators
Artificial intelligence has a trust problem. CPAs have the credibility and skill sets to address the issue.
Can we trust the algorithms at the core of artificial intelligence’s ability to learn and solve problems?
It’s a question that is becoming pertinent as businesses increasingly rely on artificial intelligence (AI) to automate tasks such as recruiting talent, predicting market trends, and assessing cybersecurity.
AI developers rarely grant access to check whether their proprietary software is reliable, secure, and devoid of harmful biases. But what if businesses could obtain an independent third-party assessment of their AI software, one providing assurance about the efficacy of the controls around it?
Robert Seamans, professor at New York University’s Stern School of Business, suggested that so-called “AI auditors” could fill this emerging role.
“Imagine an AI system that helps in the [human resources] function for a large company … that has some software that goes through résumés,” said Seamans, who has studied how AI can affect various industries and professions. “The company would want to be assured … the AI system they’re relying on isn’t going to systematically bias in favor of or against certain protected groups.”
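Seamans's résumé-screening example maps onto a concrete, quantifiable test. One common check an evaluator might run is the "four-fifths rule," which compares selection rates across demographic groups and flags a ratio below 0.8 as potential adverse impact. The sketch below is illustrative only; the group labels and outcome data are made up, and a real engagement would define groups, outcomes, and thresholds during scoping:

```python
from collections import Counter

def selection_rates(decisions):
    """Compute the selection rate per group.

    decisions: list of (group, selected) tuples, where selected is a bool
    (e.g., whether a résumé advanced to interview).
    """
    totals, picks = Counter(), Counter()
    for group, selected in decisions:
        totals[group] += 1
        if selected:
            picks[group] += 1
    return {g: picks[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest to the highest group selection rate.

    A value below 0.8 flags potential adverse impact under the
    four-fifths rule.
    """
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes: (group label, advanced to interview?)
outcomes = [("A", True), ("A", True), ("A", False), ("A", True),
            ("B", True), ("B", False), ("B", False), ("B", False)]
print(disparate_impact_ratio(outcomes))  # 0.25 / 0.75 = 0.333..., below 0.8
```

In practice, an evaluator would run this kind of metric on the system's actual decision logs and pair the statistical result with a review of the governance and monitoring controls around the model.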
CPAs aren’t the only professionals who could perform these “AI audits.” But CPAs do follow professional standards intended to instill trust in the results of assurance engagements — be that to provide confidence to investors that a company’s financial records are accurate and fair or to provide confidence to banks that a company complies with loan covenants.
In fact, objectivity and independence guide CPAs in all their engagements; the profession’s tenet is to enhance reliability, transparency, and public confidence in information. A rigorous licensure regime oversees adherence to these principles in fact and appearance.
“There is this need to have trust and confidence,” said Carrie Kostelec, CPA, senior manager—Assurance & Advisory Innovation—Artificial Intelligence at the AICPA. “CPAs are really well positioned to fill that need because of the credibility and skill sets they have in order to provide these assurance engagements across a multitude of subject matters — the common denominator is the rigor and relevance of the assurance process itself to address relevant risks.”
WHAT’S AN AI AUDIT?
Kostelec said she can envision AI audits happening at the tech companies that develop algorithms and at the businesses using them. “I think that engagements where the subject matter is functionality are probably more likely at the developer. Deployer engagements may focus more on governance,” she said. “That said, this is still very much an emerging area.”
Much has yet to be determined by regulation, customers, other stakeholders, and demand. Even the name is still in play.
Danielle Supkis Cheek, CPA, cautions against calling these engagements AI audits and instead prefers “assurance over AI,” as “it helps define the difference [from] an auditor leveraging the use of AI in their financial statement audit.”
Even using the word “assurance” can be tricky. “In many cases, it’s a regulated term,” said Supkis Cheek, senior vice president, AI, analytics, and assurance at Caseware, a Toronto-based multinational company that provides accounting and auditing software. “We call it an ‘evaluation report’ because in a particular geography, you can misuse the term ‘assurance.’”
Whether they will be called AI audits, evaluations, assurance engagements, or examinations, they are likely to resemble certain aspects of System and Organization Controls (SOC) engagements. SOC engagements are performed by CPAs and are crucial for building trust with clients and stakeholders. These engagements evaluate a service organization’s internal controls over systems used to provide outsourced services. There are three types of SOC reports. SOC 1 reports examine controls at a service organization relevant to user entities’ internal control over financial reporting. SOC 2 and SOC 3 reports address controls relevant to the security, availability, and processing integrity of the systems the service organization uses to process users’ data and the confidentiality and privacy of the information these systems process, with SOC 3 being a general-use report.
“When you break apart what’s in those SOC reports, whether it’s SOC 2 or SOC 3, it has those foundational elements of considerations around security, how the organization is governing the development of the technology, [checking] the competency of the individuals charged with the development, and [identifying] what are the key metrics,” said Richard Jackson, CPA, EY Americas assurance chief technology officer and EY global and Americas assurance AI leader. Also, Supkis Cheek said, “When presenting the results to customers, most people take a page out of the SOC playbook” and create a report like a summary SOC 3.
EY, for example, has been working on enhancing its assurance offerings, including updating existing mechanisms like SOC reports or specific services and attestation offerings around the life cycle of an AI system from development to operation and retirement. EY also has developed its own in-house framework, which provides guidance on what to do when AI is in use at audit clients, Jackson said.
Auditors can also evaluate an AI system against a framework like the National Institute of Standards and Technology (NIST) AI Risk Management Framework or ISO/IEC 42001. These frameworks guide responsible development, implementation, and maintenance of AI systems pertaining to governance processes.
“We find that auditors look to those external frameworks but will need to supplement or augment them themselves, or with collaboration with software developers,” Jackson said. “If you’re using AI to help book your vacation versus using it to perform heart surgery, you have very different tolerances for the precision in how it operates. You’re focused on different attributes, and different parts of the framework are incredibly important.”
SAFEGUARDS AGAINST AI FAILURES
A frequent concern with AI is ensuring the system is continually running as intended. AI blunders, overreach, and hallucinations have frequently made headlines. Auditors may provide assurance on governance processes or the system’s functionality. “The sort of audits that would occur of those systems are to ensure you have the right safeguards in place, so there’s not some crazy answer that gets given to somebody,” Seamans said.
With no comprehensive framework, it can be hard to decide which criteria to track and measure. Before the examination, it’s the vendor that needs to consider its AI-associated risk profile and what exactly it’s trying to mitigate, which will later help determine which criteria are applied to its systems, Supkis Cheek said. She calls this “scope negotiation.”
“If the U.S. doesn’t soon come up with its own set of standards, I think there will be a ton of specific, narrow-scope criteria that are industry-specific or technology-specific that will become best practices,” Supkis Cheek said.
“That’s one of the areas where we will see a lot more focus in the months and years [to come],” Jackson said. “How do all these different frameworks come together to create more of a coherent network that organizations can actually lean on?”
RISK PROFILES IN AI EVALUATIONS
AI risks include loss of privacy, hallucinations, cyberattacks and scams, loss of human autonomy, and devaluation of human effort, according to the MIT AI Risk Repository. As risks are determined, different elements of frameworks may become more relevant based upon the AI system being evaluated — and that’s where an auditor’s professional judgment will be critical.
“It goes back to the ability of the auditor to scope the audit, tailor the audit, and measure it to the risk assessment you made,” Jackson said.
As with any subject matter, CPAs are required through licensure to have the requisite subject matter expertise on their teams to perform these engagements, and in the case of AI, that includes an understanding of computer code, data security, and testing and monitoring the technology once it’s deployed. IT professionals may also be essential to the risk-assessment process, Jackson said. Working together, the AI auditor and IT professional could do a risk assessment that may help determine what each client’s controls should look like and subsequently tailor the audit response. It’s a “two-in-the-box mentality,” Jackson said.
“That IT competency, along with that competency of core audit skills, will help you actually arrive at the right answer,” Jackson added.
WILL EUROPE SET THE AI BENCHMARK?
All but five states and the District of Columbia have enacted AI laws, but the subjects run the gamut from AI-generated images in political advertising to the use of AI in government or health care.
Globally, Seamans sees one issue emerging: “What’s the definition against which we’re testing things?” His eyes are on Europe and their rollout of the EU Artificial Intelligence Act. “I think there’s a good chance that whatever gets developed in Europe will become the de facto standard.”
The law classifies AI systems by their risk to rights, public safety, and societal values. Through this risk-based approach, certain uses of AI are prohibited, and high-risk systems face stricter requirements.
And that’s why comprehensive legislation is important: “The real risk of biases in AI is that we take our predefined, institutional, unconscious, and all the different biases that humans carry … and we have the ability to automate those biases through the use of AI. That’s where you effectively have the capability to disenfranchise an entire class of people,” Supkis Cheek said. The EU AI Act, she said, prohibits the algorithms that disenfranchise groups of people.
Supkis Cheek predicts that any federal regulations could face a precarious rollout — like the European Union’s General Data Protection Regulation (GDPR) or the U.S. Supreme Court’s ruling in South Dakota v. Wayfair Inc., 585 U.S. 162 (2018).
“GDPR was the privacy standard for EU citizens … and even those outside of the boundaries of the EU had to comply. And Wayfair was a state-by-state changing of parameters of what you had to do for your state and local tax calculations,” Supkis Cheek said. “While they are two different concepts, both were very painful issues related to jurisdiction and geography that hit the profession pretty hard. And [AI regulations] have the opportunity to be both of these concepts near simultaneously.”
CPAs’ COMPETITIVE EDGE FOR AI ASSURANCE
Without proper controls, AI systems could amplify undesirable outcomes, but establishing the efficacy of AI systems and the controls around them may provide a competitive edge.
“An organization’s ability to have trust and confidence in how they use AI is actually one of the competitive differentiators of a lot of organizations now,” Jackson said. “A lot of people focus on the threat of AI, but how do you lean on the competitive advantages of using AI? We believe that if you can come to that conversation with trust and confidence, it helps organizations accelerate the adoption.”
Offering assurance services around AI — and developing the specialized skills required to do so — not only expands a CPA’s expertise but also reinforces the profession’s core values of trust, integrity, professional skepticism, and adherence to rigorously developed standards.
“We have very definite standards that have to be followed to make sure that engagements are planned well, executed, and that people are using their judgment and professional skepticism,” Kostelec said. And “the standards we’re following have gone through due process.”
About the author
Jamie J. Roessner is a senior content writer at the AICPA. To comment on this article, contact Jeff Drew at Jeff.Drew@aicpa-cima.com.
LEARNING RESOURCES
The AI Advantage: Leveraging AI for Efficiency and Impact
Discover how AI-driven tools and strategies are revolutionizing workflows, improving decision-making, and driving business impact.
Dec. 2, 10–11:30 a.m. ET
WEBCAST
The AI-Ready CPA: Leading Finance Through Change
The finance profession is undergoing a profound transformation driven by AI, automation, and increasing market pressures. To stay ahead, CPAs must not only adapt to these changes but also lead the way in innovation and strategic decision-making.
CPE SELF-STUDY
Core Concepts of Artificial Intelligence for Accounting Professionals
Unlock the power of artificial intelligence with our comprehensive AI Core Concepts course. Gain a solid foundation in AI components.
CPE SELF-STUDY
For more information or to make a purchase, go to aicpa-cima.com/cpe-learning or call 888-777-7077.