AI's Role in the Next Financial Crisis: A Warning from SEC Chair Gary Gensler

Stu Sjouwerman | Aug 14, 2023

TL;DR - The future of finance is intertwined with artificial intelligence (AI), and according to SEC Chair Gary Gensler, it's not all positive. In a 2020 paper written while he was still at MIT, Gensler warned that AI could be at the heart of the next financial crisis, and that regulators might be powerless to prevent it.

AI's Black Box Dilemma: AI-powered "black box" trading algorithms are a significant concern. Imagine several traders using similar algorithms, all deciding to sell at the same time. It's like a stampede at a market, causing a crash. This risk is amplified by the "apprentice effect," where people trained together tend to think alike.
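The herding dynamic above can be made concrete with a toy simulation (all numbers, thresholds, and the price-impact rule here are invented for illustration, not drawn from Gensler's paper): when every trader's model shares most of its signal, their sell decisions line up and a single bad reading empties the market at once.

```python
import random

random.seed(0)

def simulate(n_traders=100, correlation=0.0, n_steps=50):
    """Toy market: each step, a trader sells if its (noisy) signal
    crosses a threshold; `correlation` blends a shared signal into
    each trader's view, mimicking many firms running near-identical
    models. Returns the worst single-step fraction of sellers."""
    worst_drop = 0.0
    for _ in range(n_steps):
        shared = random.gauss(0, 1)          # common model output
        selling = 0
        for _ in range(n_traders):
            private = random.gauss(0, 1)     # idiosyncratic view
            signal = correlation * shared + (1 - correlation) * private
            if signal < -1.0:                # "sell" threshold
                selling += 1
        worst_drop = max(worst_drop, selling / n_traders)
    return worst_drop

# Independent models: sell decisions rarely line up.
worst_ind = simulate(correlation=0.0)
# Near-identical models: when the shared signal dips, everyone sells at once.
worst_herd = simulate(correlation=0.95)
print(f"independent models: worst step saw {worst_ind:.0%} selling")
print(f"herding models:     worst step saw {worst_herd:.0%} selling")
```

The point of the sketch is that nothing about any individual trader changes between the two runs; only the overlap between their models does, and that alone is enough to turn scattered selling into a stampede.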

Regulatory Challenges: Regulating AI is like trying to catch smoke with your bare hands. If regulators try to control AI, they might inadvertently create a situation where all AI models act the same, increasing the risk of a synchronized failure. Gensler's words ring clear: "If deep learning predictions were explainable, they wouldn't be used in the first place."

Discrimination and Unpredictability: AIs are like mysterious judges, assessing creditworthiness and other financial decisions. But their opacity makes it hard to tell if they're acting in a discriminatory manner. An AI that was fair yesterday might become biased today, and there's no way to predict or prevent that.
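One partial answer to an unexplainable model is to audit its outcomes rather than its internals. Below is a minimal sketch of a demographic-parity check (the group labels, decisions, and the 5% tolerance are invented for illustration): the same monitoring code that passes yesterday's model flags today's retrained one.

```python
def approval_rate(decisions, groups, g):
    """Fraction of approvals among applicants in group g."""
    sel = [d for d, grp in zip(decisions, groups) if grp == g]
    return sum(sel) / len(sel)

def parity_gap(decisions, groups):
    """Absolute difference in approval rates between groups A and B --
    one simple, auditable signal an opaque model can be checked against."""
    return abs(approval_rate(decisions, groups, "A")
               - approval_rate(decisions, groups, "B"))

groups    = ["A", "A", "A", "B", "B", "B"]
yesterday = [1, 0, 1, 1, 0, 1]   # yesterday's model: equal approval rates
today     = [1, 1, 1, 1, 0, 0]   # retrained model: same applicants, skewed outcomes

print(f"yesterday's gap: {parity_gap(yesterday, groups):.2f}")
print(f"today's gap:     {parity_gap(today, groups):.2f}")
```

A check like this cannot predict when a model will drift, which is Gensler's point; it can only catch the drift after decisions have already been made.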

Systemic Risks and Regulatory Gaps: Deep learning in finance is like a growing storm, likely to increase systemic risks. Regulators might try to slow it down by increasing capital requirements or implementing "sniff tests" from more explainable models, but Gensler admits these measures are "insufficient to the task."

The Data Conundrum: AI's hunger for data is like an unquenchable thirst. Models built on the same datasets may act in lockstep, leading to crowding and herding. This convergence can create monopolies and "single points of failure" that threaten the entire network. Think of Lehman Brothers' failure, but on a data-driven scale.
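The lockstep risk follows directly from shared training data. As a toy illustration (the credit scores, labels, and threshold-fitting rule are all invented), two "independent" vendors fitting the same dataset converge on the same model, so every lender using either one rejects the same applicants:

```python
def fit_threshold(scores, labels):
    """Pick the cutoff that best separates good (1) from bad (0)
    outcomes in the training data -- a stand-in for model fitting."""
    best_t, best_acc = 0.0, -1.0
    for t in sorted(set(scores)):
        acc = sum((s >= t) == bool(y)
                  for s, y in zip(scores, labels)) / len(labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# One shared dataset, as when every lender buys the same credit file.
scores = [300, 420, 510, 580, 640, 700, 760, 810]
labels = [0, 0, 0, 1, 0, 1, 1, 1]

# Two nominally independent vendors trained on the same data...
vendor_a = fit_threshold(scores, labels)
vendor_b = fit_threshold(list(scores), list(labels))

# ...land on the same cutoff, so their decisions are identical.
applicants  = [450, 575, 590, 720]
decisions_a = [s >= vendor_a for s in applicants]
decisions_b = [s >= vendor_b for s in applicants]
print(f"vendor A cutoff: {vendor_a}, vendor B cutoff: {vendor_b}")
```

Apparent diversity of vendors is no diversity at all if the data behind them is the same, which is how a single dataset becomes a single point of failure.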

Incomplete and Dangerous Data: Even the largest datasets are like incomplete puzzles, lacking enough historical information to cover a full financial cycle. This gap can lead to devastating consequences, as seen during the 2008 financial crisis, when risk models trained on boom-era data failed to anticipate the crash.

Global Risks: Developing economies might end up using AIs trained on foreign data, like trying to navigate a local market with a map of a different city. The risks here are even larger.

The Bottom Line: AI's unknowns are its most dangerous aspect. The intertwining of AI and finance is a complex dance, and as Gensler warns, one misstep could lead to a crisis.

