Risk and probability are based on data. You establish the statistical risk of losing an investment based on other investments that didn’t work out. You assess whether someone is likely to repay a loan based on their credit score and other reliability indicators. But what happens when you want to calculate risk for an incident that’s never happened? How do you handle uncertainty and maneuver quickly as things change, especially if the premise underlying your risk model is outdated?
To weather incoming storms, you need three things:
- A collaborative strategy that welcomes questions from across the business
- Robust risk models capable of withstanding shocks (machine learning for risk)
- Data pipelines that respond swiftly, finding new sources that reflect the reality on the ground
Asking questions: What’s wrong with your risk methodology?
Done right, data science helps you shift from subjective intuition to an evidence-based strategy. Sometimes, though, this goes a little too far. Machine learning models are one part of a broader decision-making strategy, one that also accounts for the role of domain expertise. It's important that risk managers feel comfortable challenging, examining, and testing a model's findings, rather than accepting without question that whatever the model says must be correct.
Part of this means recognizing that machine learning models can always be improved. That they are only as strong as the data you feed into them. That they need to be updated when external circumstances change. It also means inviting questions from business colleagues whose thorough domain knowledge gives them well-founded concerns.
Ultimately, all models are based on scientific assumptions about prevailing conditions. You need to understand exactly what those assumptions are and be willing to challenge them. Otherwise, you could find that your sophisticated risk strategy is built on very shaky ground.
Withstanding shocks: machine learning for risk management
A key part of this is machine learning risk management. Risks don't only come from outside: if you aren't stress-testing your models properly, the models themselves may pose unidentified risks.
Your models must be designed using scenarios that take into account an appropriate range of risks. Given that they have direct experience of what can go wrong, the people in your organization who actually have to take risks (for example, team members who decided whether to approve a loan) should be involved in the process of identifying risk factors, manifestations, and outcomes.
Make sure, too, that your models are designed to recognize new types and scopes of risk. Take securities trading: prior to the 2008 financial crash, most institutions used a daily Value-at-Risk (VaR) measure to assess trading risk. VaR states the loss you could exceed at a given probability: a one-day VaR of $20 million at the 2% level means there's a 2% chance of losing more than $20 million over the next day. If the measure is accurate, you will lose more than $20 million on 2% of days, and no more than that on the other 98%.
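As a concrete illustration, here is a minimal historical-simulation VaR sketch. The P&L series is synthetic, generated purely for the example, not real trading data:

```python
import numpy as np

def historical_var(pnl, level=0.02):
    """One-day Value-at-Risk: the loss exceeded with probability `level`,
    estimated as a percentile of the historical P&L distribution."""
    return -np.percentile(pnl, 100 * level)

# Synthetic daily P&L history in $ millions (stand-in for real data).
rng = np.random.default_rng(42)
daily_pnl = rng.normal(loc=0.5, scale=10.0, size=1000)

var_2pct = historical_var(daily_pnl, level=0.02)
print(f"2% one-day VaR: ${var_2pct:.1f}M")
# If the estimate holds, the realized loss should exceed this figure
# on roughly 2% of days.
```

The point of the sketch is the definition, not the method: in practice the percentile would be computed over a carefully chosen window of real P&L data, which is exactly where the assumptions discussed above come in.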
There are several problems with this. Firstly, it doesn't distinguish between a small overrun you can absorb (say, you lost $21 million on 3% of days this year) and a rare but catastrophic loss (you only exceeded your VaR once, but you lost $1 billion). Secondly, it assumes a firm could sell off underperforming assets immediately, limiting the damage to a few days. But what if you can't sell off your portfolio? What if no one will take it off your hands? The time horizon completely changes the scale of the risk.
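One common response to the first blind spot is to complement VaR with expected shortfall, the average loss on the worst days. The sketch below uses made-up P&L numbers chosen to mirror the two cases above, a simple order-statistic definition of VaR, and no real market data:

```python
import numpy as np

def var(pnl, level=0.02):
    """VaR as the k-th worst outcome, where k = ceil(level * n)."""
    losses = np.sort(-np.asarray(pnl))[::-1]        # largest loss first
    k = max(int(np.ceil(level * len(losses))), 1)
    return losses[k - 1]

def expected_shortfall(pnl, level=0.02):
    """Average loss over the worst `level` fraction of days."""
    losses = np.sort(-np.asarray(pnl))[::-1]
    k = max(int(np.ceil(level * len(losses))), 1)
    return losses[:k].mean()

# Two illustrative books with identical 2% VaR but very different tails.
mild = np.concatenate([np.full(98, 1.0), np.full(2, -21.0)])    # small overruns
severe = np.concatenate([np.full(98, 1.0), [-21.0, -1000.0]])   # one catastrophe

print(var(mild), var(severe))                                # 21.0 21.0
print(expected_shortfall(mild), expected_shortfall(severe))  # 21.0 510.5
```

Both books report the same VaR, yet the tail of the second is an order of magnitude worse, which is exactly the distinction a VaR-only view fails to make.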
This is linked to another crucial element of robust machine learning models: stress-testing. Typically, you do this using data from scenarios that have happened before, such as changing demand in a particular sector, spikes and crashes in oil prices, or a drop in interest rates.
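A minimal sketch of that kind of historical stress test, using a linear factor model with invented sensitivities and shock sizes (none calibrated to real events):

```python
# P&L impact in $M per 1% move in each risk factor (hypothetical sensitivities).
sensitivities = {"oil_price": -0.8, "interest_rate": 2.5, "sector_demand": 1.2}

# Stylized historical scenarios expressed as % moves in each factor.
scenarios = {
    "oil_spike": {"oil_price": 40.0, "interest_rate": -1.0, "sector_demand": -5.0},
    "rate_drop": {"oil_price": 2.0, "interest_rate": -15.0, "sector_demand": 3.0},
}

def scenario_pnl(sens, scenario):
    """First-order (linear) approximation of P&L under a factor-shock scenario."""
    return sum(sens[factor] * move for factor, move in scenario.items())

for name, scenario in scenarios.items():
    print(f"{name}: {scenario_pnl(sensitivities, scenario):+.1f}M")
```

A real stress-testing framework would use many more factors and nonlinear revaluation, but the shape is the same: replay a known shock through your current exposures and see what comes out.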
However, machine learning risk models offer an excellent way to test entirely new scenarios that haven't happened yet. By looking at hypothetical interdependencies between different risk factors, you could test, for example, what would happen if a disaster event put trading on hold for a few days, a global pandemic interrupted your supply networks, or new regulations changed a key part of your operations.
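One way to explore such hypothetical interdependencies is a correlated Monte Carlo simulation. This sketch invents three risk factors, a correlation structure, and exposures purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical correlations between three risk factors:
# trading disruption, supply-chain interruption, regulatory change.
corr = np.array([[1.0, 0.6, 0.2],
                 [0.6, 1.0, 0.4],
                 [0.2, 0.4, 1.0]])
vols = np.array([5.0, 8.0, 3.0])              # assumed % volatility per factor
cov = corr * np.outer(vols, vols)

# 10,000 joint scenarios, including factor combinations never seen historically.
shocks = rng.multivariate_normal(mean=np.zeros(3), cov=cov, size=10_000)

exposures = np.array([-1.5, -2.0, -0.7])      # $M P&L per 1% move in each factor
pnl = shocks @ exposures
print(f"Worst simulated outcome: {pnl.min():.1f}M")
print(f"2% tail threshold: {np.percentile(pnl, 2):.1f}M")
```

The value of this approach is that you choose the correlations: by deliberately setting interdependencies you have never observed, you can ask what a genuinely novel scenario would do to the book.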
Getting the right data: the changing face of risk in 2020
A common problem for risk models is overreliance on internal data. Yes, historical data is crucial, and you certainly don't want to lean too heavily on forward-looking projections, as these can skew your results. But it's important to appreciate that the data you hold in-house is only one small part of the picture. Unless you're keeping an eye on broader emerging patterns, you will be fixated on too narrow a view and far too slow to respond to threats happening right now. Incorporating data from outside your organization helps you get a more complete, balanced view.
For example, incidents of fraud increase sharply during times of crisis. However, over-correcting by erring on the side of caution, such as by lowering the threshold to block more and more financial transactions, can backfire by alienating valuable customers. Rather than rejecting or delaying more and more activity, you need to implement better anomaly detection and more accurate models that pinpoint and selectively block high-risk attempts.
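As a toy illustration of selective blocking, here is a score-based anomaly detector using robust z-scores (median and MAD); the transaction amounts and threshold are invented for the example:

```python
import numpy as np

def anomaly_scores(amounts):
    """Robust z-scores: distance from the median, in units of the
    median absolute deviation (MAD)."""
    amounts = np.asarray(amounts, dtype=float)
    median = np.median(amounts)
    mad = np.median(np.abs(amounts - median))
    return np.abs(amounts - median) / (mad if mad > 0 else 1.0)

# Hypothetical transaction amounts: mostly routine, two extreme outliers.
txns = np.array([20.0, 35.0, 18.0, 42.0, 25.0, 30.0, 9_500.0, 22.0, 28.0, 7_800.0])

scores = anomaly_scores(txns)
flagged = txns[scores > 10]     # block only the clearly anomalous transactions
print(f"Flagged {len(flagged)} of {len(txns)}: {flagged}")
```

Only the two extreme transactions are blocked; the routine ones sail through. A production system would score far richer features than amount alone, but the principle, pinpointing the tail rather than lowering a blanket threshold, is the same.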
For this, you need robust, effective machine learning models built on complete, up-to-date, high-quality data. In such a fast-changing risk environment, you will likely find it difficult to access enough suitable, relevant historical data from inside your organization. Credit scores and other reliability indicators that were true for a customer in 2019 may give an entirely inaccurate picture of their situation now. To keep track of the true level of risk, you need to incorporate up-to-the-minute external data sources.
That may include stock market indices, government data, text analysis of online conversations, alternative financial data, economic and employment data, and perhaps even macro health or epidemiological trends that could predict changes to government policy in the coming weeks or months. Taken together, these datasets can contextualize and enrich one another, allowing you to get a complete picture of the risk profile right now.
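At its simplest, that contextualization is a join between internal records and external feeds. A hypothetical sketch, with all names and figures invented:

```python
# Internal records: the in-house view, frozen at the last credit assessment.
internal = [
    {"customer": "A", "region": "north", "credit_score_2019": 720},
    {"customer": "B", "region": "south", "credit_score_2019": 680},
]

# Stand-in for live external feeds (economic and employment indicators).
external = {
    "north": {"unemployment_pct": 9.2, "activity_index": 0.71},
    "south": {"unemployment_pct": 4.1, "activity_index": 0.95},
}

# Enrich each internal record with the current indicators for its region.
enriched = [{**row, **external[row["region"]]} for row in internal]
for row in enriched:
    print(row)
```

Customer A's 2019 credit score looks fine in isolation; joined with a 9.2% regional unemployment figure, the risk picture changes, which is precisely what internal data alone would miss.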
Final thoughts: preparing for what you can’t know
Just because you don't know exactly what will trigger a market shock doesn't mean you can't test how different types of market shock would affect you. It's important to understand not only the known risks but also how you will respond to a zero-day threat.
That means looking outside your organization for valuable datasets. It means looking inside your organization for domain knowledge. Listen to your business colleagues when they raise concerns and use these to stress-test your model. Work on your ability to explain results and models to non-technical colleagues who may not be able to interpret the nuance unaided. The key is to encourage a culture of questioning, transparency, and accountability across the organization. The alternative is complacency… and that way lies disaster.