The role of model risk management (MRM) in surveillance functions has evolved significantly over the last five years. Where previously oversight was inconsistent across firms, the industry has been pushed towards applying broader MRM principles across surveillance tools.
Regulators such as the Federal Reserve, the Office of the Comptroller of the Currency (OCC), the Bank of England, and the Financial Conduct Authority (FCA), notably through its Market Watch 79, have been central in driving this change. Surprisingly, however, there have been no enforcement actions specifically tied to poor model governance in surveillance, although some participants speculated that capital requirements could eventually be influenced by these practices.
The change has only been possible because of education. MRM professionals are typically quantitative experts in market, credit, and liquidity risk models – areas where data completeness, reproducibility, and statistical rigour are critical. By contrast, surveillance relies on risk-based monitoring, partial datasets, and evolving technologies such as artificial intelligence, or AI. As one participant put it, “Those qualified to validate credit or market risk models are not necessarily qualified to do the same with lexicons or AI-driven trade surveillance.” Bridging this skills gap has been essential.
The EU AI Act has also provided a helpful framework for assessing the risk of surveillance models. Firms are beginning to apply this structure to tools that monitor employee communications in the workplace, offering a potential blueprint for proportionality and risk-tiered governance.