Artificial Intelligence in Finance (Risks)
(528 words)
Glossary

| # | English | Korean |
|---|---------|--------|
| 1 | GenAI | 생성형 인공지능 (학습한 데이터를 기반으로 텍스트나 이미지를 생성할 수 있는 인공지능) |
AI may offer significant benefits to financial institutions in reducing costs and generating revenue. But as financial institutions explore new ways to benefit from AI, policymakers and financial institutions must consider the potential broader risks.
A key risk for financial institutions using AI tools is model risk: the consequences of poorly designed or misused models. Managing model risk requires attention to data quality, model design, and governance, all of which are critical to the safe and effective development and use of AI. For example, it is important to consider where limitations in the data can skew a model's outputs. Models trained on historical data are, by definition, informed only by the examples of stress or outlier events contained in that data. While such events stand out in our memories, they are relatively few and unlikely to recur in the same ways. As a result, some models that could be used for trading may be less robust or less predictive in future periods of stress.
It is also critical to consider how a model is being used. Even a well-designed model can present risks if it is used or interpreted inappropriately. As firms grow more comfortable with AI models and their outputs, it may become easy to stop questioning the models' underlying assumptions or to forgo independent analysis. We have seen these kinds of dependencies before. For example, prior to the financial crisis, banks and market participants relied on credit rating agencies to an extent that reduced their capacity for independent assessment. Newer AI tools may create or exacerbate some of these existing challenges for governance and oversight. These tools can be more opaque in their reasoning, more dynamic, and more autonomous. For example, the speed and autonomy of some AI tools exacerbate the problem of overreliance, as the window for human intervention may be very short. This is particularly true for applications like trading strategies, where speed is essential.
Relatedly, the use of AI tools may increase reliance on vendors and critical service providers. While the use of third parties can offer financial institutions significant benefits, these dependencies can also introduce risks. For example, AI tools require significant computing power and may increase reliance on a relatively small number of cloud service providers. There is also likely less visibility into AI tools developed by vendors than into those developed in house.
Operational risks related to AI may also come from outside the financial institution. These include AI-enabled cyberattacks, fraud, and deepfakes. Widely available GenAI tools are already expanding the pool of adversaries and making all adversaries more proficient. While the tactics, such as phishing, are often not new, they have become more effective and efficient over the past year. For example, in a reported incident earlier this year, an employee of a multinational financial institution was tricked into transferring $25 million after attending a video conference call with an AI deepfake of the firm's chief financial officer.
We should also consider whether AI use by financial firms could present financial stability risks – that is, risks to the broader financial system.