Google & AIR Set Out Framework for AI Risk in Banking Sector

Google Cloud partners with regulatory innovation non-profit AIR to propose new guidelines for managing generative AI risks in financial institutions

Financial institutions need to revamp their model risk management frameworks to account for the emergence of generative artificial intelligence, according to a new paper from Google Cloud and the Alliance for Innovative Regulation (AIR).

The paper, released by Google Cloud, Alphabet Inc’s enterprise tech division, and AIR, a non-profit organisation focused on financial regulation modernisation, estimates that generative AI (Gen AI) could contribute £270 billion (US$350bn) annually to the banking sector.

Generative AI, a form of artificial intelligence that generates new content from patterns learned in its training data rather than simply analysing existing data, requires specific governance frameworks to manage potential risks, the paper argues.

Model risk in the AI era

“Striking a balance between harnessing its potential and mitigating its risks will be crucial for the adoption of generative AI among financial institutions,” say Behnaz Kibria, Director of Government Affairs and Public Policy at Google Cloud, and Jo Ann Barefoot, Co-Founder and CEO of AIR, in a blog post published on Google Cloud’s website.

The paper outlines how existing model risk management frameworks, which financial institutions use to assess and control risks in their decision-making tools, can be adapted for generative AI applications. These frameworks typically include validation processes, governance structures, and risk mitigation strategies.

Financial institutions are implementing Gen AI solutions across multiple business functions. These range from customer service automation to fraud detection systems and regulatory compliance tools. The technology differs from traditional AI systems in its ability to generate new content rather than simply analyse existing data.

Both writers emphasise that the technology sector and financial institutions must work together to ensure responsible implementation of these systems.

Regulatory clarity needed

The paper identifies three areas where regulatory guidance needs updating: documentation requirements for AI models, evaluation methods for AI systems, and implementation controls.

Model documentation refers to the detailed recording of how AI systems make decisions, including the data sources used and the decision-making processes involved. This documentation becomes crucial for audit trails and regulatory compliance.
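
The paper does not prescribe a documentation format, but a minimal sketch of what such a record might capture could look like the following Python example; every field name (model_name, data_sources, approvers and so on) is an illustrative assumption rather than something taken from the report.

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class ModelDocumentation:
    """Minimal documentation record for a Gen AI model; fields are illustrative."""
    model_name: str
    version: str
    intended_use: str
    data_sources: list[str]        # where training or grounding data came from
    known_limitations: list[str]   # documented failure modes
    last_validated: date           # date of the most recent validation exercise
    approvers: list[str] = field(default_factory=list)  # sign-offs for the audit trail

    def to_audit_json(self) -> str:
        """Serialise the record so it can be stored alongside audit logs."""
        return json.dumps(asdict(self), default=str, indent=2)

# Hypothetical record for an imagined customer-service assistant
doc = ModelDocumentation(
    model_name="support-assistant",
    version="2025-01",
    intended_use="Drafting responses to routine customer queries",
    data_sources=["internal knowledge base", "published product FAQs"],
    known_limitations=["may paraphrase policy wording inaccurately"],
    last_validated=date(2025, 1, 15),
    approvers=["model risk committee"],
)
print(doc.to_audit_json())
```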

Evaluation methods involve techniques such as ‘grounding,’ where AI outputs are verified against trusted sources. This process helps ensure the accuracy and reliability of AI-generated content and decisions.
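
As a rough illustration of the idea only, the deliberately simple Python sketch below compares a generated reply against an approved source text; the grounding_score function, its token-overlap scoring and the 0.8 threshold are assumptions made for this example, not methods described in the paper, and production systems would rely on far stronger techniques such as citation verification.

```python
def grounding_score(answer: str, trusted_sources: list[str]) -> float:
    """Crude grounding check: the fraction of answer tokens that also appear
    in at least one trusted source. Real systems use stronger methods such as
    citation checking or entailment models; this only illustrates the idea."""
    answer_tokens = set(answer.lower().split())
    if not answer_tokens:
        return 0.0
    source_tokens: set[str] = set()
    for source in trusted_sources:
        source_tokens.update(source.lower().split())
    return len(answer_tokens & source_tokens) / len(answer_tokens)

# Hypothetical example: check a generated reply against an approved policy text
policy = "Customers may dispute a card transaction within 60 days of the statement date."
reply = "You can dispute a card transaction within 60 days of the statement date."
score = grounding_score(reply, [policy])
print(f"grounding score: {score:.2f}")
if score < 0.8:  # the threshold is an assumption, not a figure from the paper
    print("Route the reply to human review")
```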

“Regulators could anchor to industry best practices and standards that they consider strong – perhaps presumptive – evidence that the requirements of model risk management frameworks have been met,” Kibria and Barefoot write.

Implementation controls and oversight

The paper suggests that financial institutions should implement specific controls for AI systems, including monitoring protocols and human oversight. These measures aim to ensure AI systems remain within acceptable risk parameters.

Continuous monitoring systems track AI performance and flag potential issues in real time, while human oversight ensures decisions align with institutional policies and regulatory requirements.
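
A minimal sketch of how such a loop could be wired together is shown below; the monitor_output function, the confidence threshold and the review queue are hypothetical choices made for illustration, not mechanisms specified by Google Cloud or AIR.

```python
import logging
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("genai-monitor")

CONFIDENCE_THRESHOLD = 0.8  # assumed threshold for illustration only

def monitor_output(output: dict, score_fn: Callable[[str], float],
                   review_queue: list) -> None:
    """Score a generated output as it is produced, escalate low-scoring items
    to a human review queue, and log every decision for the audit trail."""
    score = score_fn(output["text"])
    if score < CONFIDENCE_THRESHOLD:
        review_queue.append(output)
        log.warning("Escalated %s for human review (score=%.2f)", output["id"], score)
    else:
        log.info("%s within tolerance (score=%.2f)", output["id"], score)

# Hypothetical usage: a stand-in scoring function and a single generated output
review_queue: list = []
monitor_output(
    {"id": "resp-001", "text": "You can dispute a transaction within 60 days."},
    score_fn=lambda text: 0.65,  # placeholder; a real system would compute a score
    review_queue=review_queue,
)
print(f"{len(review_queue)} output(s) awaiting human review")
```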

Third-party management

The document also addresses the management of third-party AI providers, a crucial consideration as many financial institutions rely on external technology vendors for their AI capabilities.

The recommendations extend to shared responsibility models between financial institutions and their technology providers, outlining how risk management responsibilities should be divided. This includes clear delineation of roles in model validation, ongoing monitoring, and risk mitigation.
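
The paper does not publish a specific split of duties, but the idea can be illustrated with a simple, assumed responsibility matrix in which each risk-management activity is owned by the provider, the institution, or both.

```python
# Illustrative responsibility matrix; assignments are assumptions made for this
# example, not allocations taken from the Google Cloud and AIR paper.
SHARED_RESPONSIBILITIES = {
    "foundation model training":     "provider",
    "model validation for use case": "institution",
    "grounding data quality":        "shared",
    "ongoing output monitoring":     "institution",
    "infrastructure security":       "provider",
    "incident escalation":           "shared",
}

def owner_of(activity: str) -> str:
    """Look up which party is accountable for a given risk-management activity."""
    return SHARED_RESPONSIBILITIES.get(activity, "unassigned")

print(owner_of("model validation for use case"))  # institution
print(owner_of("grounding data quality"))         # shared
```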

The paper proposes that regulators acknowledge established governance practices and provide enhanced regulatory clarity across four key areas: model governance, model development, model validation, and third-party risk management.

For financial institutions using third-party AI systems, the paper emphasises the importance of maintaining oversight while leveraging external expertise. This includes establishing clear lines of responsibility and maintaining appropriate levels of internal expertise to effectively manage these relationships.

As the report says: “Collaboration between industry participants, regulators and governmental bodies will be key.”
