WASHINGTON — Acting Comptroller of the Currency Michael Hsu on Thursday said artificial intelligence providers and end-users like banks should develop a framework of shared responsibility for errors that stem from adoption of AI models.
“In the cloud computing context, the ‘shared responsibility model’ allocates operations, maintenance, and security responsibilities to customers and cloud service providers depending on the service a customer selects,” he said. “A similar framework could be developed for AI.”
The statements came in a speech before the 2024 Conference on Artificial Intelligence and Financial Stability, hosted jointly by the Financial Stability Oversight Council and the Brookings Institution.
As the financial industry’s interest in artificial intelligence and machine learning grows, banking regulators are paying closer attention to how firms manage the technology’s risks.
Hsu likened AI’s adoption in the financial sector to the trajectory of electronic trading 20 years ago, in which the technology began as a novelty, then became a more trusted tool before emerging as an agent in its own right. In AI’s case, Hsu said, the technology will first provide information to firms, then assist them in making operations faster and more efficient, and ultimately graduate to executing tasks autonomously.
He said banks need to be wary of the steep escalation of risk as they progress through these stages. Establishing safeguards, or “gates,” between stages will be essential to ensure that firms can pause and evaluate AI’s role before that role is expanded.
“Before opening a gate and pursuing the next phase of development, banks should ensure that proper controls are in place and accountability is clearly established,” Hsu said. “We expect banks to use controls commensurate with a bank’s risk exposures, its business activities and the complexity and extent of its model use.”
Hsu also touched on the financial stability risks of AI, saying the emerging technology significantly increases the ability of bad actors to manipulate markets and destabilize financial institutions.
“Say an AI agent … concludes that to maximize stock returns, it should take short positions in a set of banks and spread information to prompt runs and destabilize them,” he said. “This financial scenario seems uncomfortably plausible given the state of today’s markets and technology.”
Hsu said firms should also be wary of AI’s potential to expand their liabilities along with their efficiency, citing a case in Canada in which an airline was held liable after its customer service chatbot gave a passenger inaccurate information about its bereavement fare policy.
AI systems are harder to hold accountable than company websites or staff, Hsu said, because their complexity and opacity make it difficult to pinpoint responsibility and fix errors. The same concern applies to using AI to assist in credit underwriting: consumers denied by an AI model may question the fairness of those decisions, he said.
“With AI, it is easier to disclaim responsibility for bad outcomes than with any other technology in recent memory,” he said. “Trust not only sits at the heart of banking, it is likely the limiting factor to AI adoption and use more generally.”