If you missed the fascinating “Banking, Risk Management, and AI” panel at the Ai4 virtual conference, read on for a quick summary of what was discussed.
Understanding and managing risk is crucial for banks and financial institutions. As the financial services industry becomes more data-driven, artificial intelligence is playing a powerful role in the way banks comprehend and handle their risk. In this panel, industry leaders in finance provided insight into the latest thinking on AI technologies for the different types of risk in finance, such as AI risk, credit risk, operational risk, market risk, and liquidity risk.
Natalia Bailey - Policy Advisor, Digital Finance at the Institute of International Finance
Agus Sudjianto - EVP, Head of Corporate Model Risk, Wells Fargo
Jacob Kosoff - Head of Model Risk Management and Validation, Regions Bank
Ashis Jain - Chief Product Officer, Arkose Labs
Amit Srivastav - Executive Director, Morgan Stanley
Victor Ghadban - Chief Field Data Scientist, Explorium
From the model perspective of AI and risk management, things have changed drastically. Since the 2008 financial crisis, boards of directors and senior executives have developed a skepticism toward analytics and predictive models after seeing a variety of their underwriting, capital markets, and forecasting models fail. Senior leaders know that models are not always accurate, and they have become far more attuned to the weaknesses, limitations, risks, and pitfalls of predictive models and analytics. Even though models and machine learning algorithms have improved considerably since 2009, mistakes and errors still occur, and new risks are constantly arising as the industry evolves. Since the crisis, leadership teams have developed a better understanding of both the benefits and the risks of predictive models. Model risk management today is driven by leadership teams with healthy risk management strategies, and there has been a heavy emphasis on building risk management frameworks.
Two major trends have emerged since 2009 with regard to predictive models and risk:
Around 2009 is when big data became more prevalent in data analytics. Prior to this, most institutions focused only on the data they had on hand, which meant their models weren't predicting as accurately as possible. In the past ten years, compute power and data storage have come a long way, and post-2009 companies started to build more accurate predictive models with the increased amount of data available. The models seemed fine until COVID-19 hit, and then they started to fail again. It is hard to predict risk because events like financial crises and pandemics aren't "normal" occurrences; they are breaks from "normal" trends. Now there is data from COVID-19 that can start being added to predictive models. Risk is constantly evolving as unprecedented events continue to occur.
Growing better datasets by accessing data not only internally but externally as well is the key to building better predictive risk models. The amount of data is not the only thing that matters; the quality of the data and its sources are just as important. Finding the right data to train accurate risk models is emerging as a major challenge.
AI models typically make predictions based on past events, and the past isn't always the best basis for future predictions. Lending risk is a good example. Prior to COVID-19, it was easier to simply look at a FICO score or credit bureau information and decide whether or not to extend credit. After lockdowns and uncertainty around employment, income, family life, and when things would go "back to normal," many personal and business behaviors changed. Now, simply having "good credit" doesn't mean as much, given all of the changes the pandemic brought on (and continues to bring on). Data acquisition for risk models will have to evolve, as the models cannot depend on historical data alone to make accurate predictions. The challenge is ensuring that AI models are trained on the right data. Data quality is a key aspect of AI ecosystems; without the right data, the models cannot accurately predict risk.
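The lending example above can be sketched in a few lines. This is a purely illustrative toy, not an actual underwriting model: the fields, thresholds, and the "months of income disruption" signal are all invented to show why a static score cutoff trained on pre-pandemic behavior can mislead once behavior shifts.

```python
# Toy credit decision (all thresholds and fields are hypothetical).

def approve_score_only(fico, cutoff=680):
    """Pre-pandemic style: decide on the historical credit score alone."""
    return fico >= cutoff

def approve_with_context(fico, months_income_disrupted, cutoff=680):
    """Discount a historically 'good' score when recent income has been
    unstable -- the kind of signal COVID-19 made newly relevant."""
    if months_income_disrupted >= 3:
        return fico >= cutoff + 60
    return fico >= cutoff

applicant = {"fico": 700, "months_income_disrupted": 4}
print(approve_score_only(applicant["fico"]))                      # True
print(approve_with_context(applicant["fico"],
                           applicant["months_income_disrupted"])) # False
```

The two functions disagree on the same applicant: a score that cleared the old bar no longer clears it once recent behavioral data is taken into account, which is the panel's point about historical data losing predictive power.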
Other common challenges when it comes to AI are explainability, data privacy, and bias. It is common to see organizations putting too much trust in "black box" models without understanding how or why the models were built, or the data and algorithms used to build them. Without a deeper understanding of a model and the decision-making it drives, it's hard to tell whether the model is making accurate predictions. AI and machine learning are gaining popularity without necessarily gaining the same momentum in how deeply they are understood. Users need to be aware of AI-related risks.
Ultimately, machine learning and AI are in the early stages of development and will take some time to mature.
When it comes to using AI solutions for risk, the panelists agreed that there is a need for caution due to the challenges listed above. That is not to say there aren't huge opportunities for applications of AI in risk management.
Even though AI is still in the early stages of development, it has come a long way. A great example is the use of AI in anomaly detection, and how far it has advanced in just the past five years. Currently there is a lot of growth, traction, and value in AI and ML for cybersecurity.
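To make "anomaly detection" concrete, here is a minimal sketch of one of the simplest approaches: scoring a new observation by how many standard deviations it sits from a historical baseline. The transaction amounts and the 3-sigma threshold are made-up illustrations; production systems use far richer features and models.

```python
from statistics import mean, stdev

def zscore(value, baseline):
    """How many standard deviations `value` is from the baseline mean."""
    mu = mean(baseline)
    sigma = stdev(baseline)
    return abs(value - mu) / sigma

# Hypothetical history of a customer's transaction amounts.
baseline = [102.0, 98.5, 110.0, 95.0, 105.5, 99.0, 101.0]

print(zscore(104.0, baseline) > 3.0)   # False -- a typical amount
print(zscore(5000.0, baseline) > 3.0)  # True  -- flagged as anomalous
```

The same idea, generalized to many features and learned thresholds, underlies the fraud and money-laundering detection discussed below.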
While some believe that the purpose of AI is to replace people, in actuality it is used to make decisions faster. It's about having the best data available so you can find anomalies faster than your competitors can. This is true for all types of risk, including credit risk, money laundering, and fraud. AI applications are already helping companies save money and avoid making bad decisions. AI can monitor and make decisions in real time, and it is also making feature engineering for risk models much more sophisticated.
Data privacy is emerging as an important issue; more data about us is being captured and stored than ever before. In the United States, many states have been putting new data privacy regulations in place. There is a lot of concern about data collection, use, and management, especially when it comes to PII (personally identifiable information). Regulators' emphasis on data privacy will affect the flexibility of risk models, so models will need to be consistently monitored, changed, and tweaked. Another major concern is data bias: models in production don't always reflect reality, which can have very negative repercussions for society.
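The ongoing monitoring the panel calls for is often done with drift metrics. As one hedged example (not something any panelist specifically endorsed), the Population Stability Index compares a model's score distribution at training time to its distribution in production; a common rule of thumb treats PSI below 0.1 as stable and above 0.25 as a major shift worth investigating. A bare-bones sketch:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a training-time (expected)
    and production-time (actual) score distribution."""
    lo, hi = min(expected), max(expected)

    def fractions(values):
        counts = [0] * bins
        for v in values:
            i = min(int((v - lo) / (hi - lo) * bins), bins - 1) if hi > lo else 0
            counts[max(i, 0)] += 1
        # Small floor avoids log(0) when a bin is empty.
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Hypothetical model scores: uniform at training time, shifted upward later.
training_scores = [i / 100 for i in range(100)]
shifted_scores = [min(s + 0.3, 0.99) for s in training_scores]

print(psi(training_scores, training_scores))  # ~0.0: no drift
print(psi(training_scores, shifted_scores) > 0.25)  # True: major shift
```

A metric like this catches the silent failure mode the panel worries about: a model that keeps producing confident scores while the population it was trained on no longer exists.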
Regulatory bodies tend to be concerned about what AI can actually do. The issue is that AI is still evolving, which creates grey areas. This affects not only those implementing AI, but also those trying to regulate it. There needs to be continuous review of AI systems.
You can learn more by watching the entire session recording here.