On March 6, I had the opportunity to speak and provide testimony at the SEC Investor Advisory Committee’s panel on Retail Investor Fraud in America—a critical discussion about how AI is supercharging financial deception. March 6 also happened to be National Slam the Scam Day—an ironic but fitting coincidence.
With AI rapidly democratizing deepfakes, synthetic voices, and hyper-personalized scams, it’s never been easier for bad actors to exploit trust, emotions, and financial systems at scale. My goal during this session was to cut through the noise, demonstrate the risks in real time, and offer actionable insights.
This important conversation brought together experts from various disciplines to provide a comprehensive view of the AI-enabled fraud landscape and potential mitigating strategies. Together, we created a powerful collective voice on the urgent need for action.
Figure 1: Me standing in front of the SEC building in DC.
About the Panel
Our panel, moderated by Andrea Seidt (Ohio Securities Commissioner and Chair of the SEC's Investor as Owner Subcommittee), brought together experts with multifaceted backgrounds in understanding and addressing financial fraud. Each speaker contributed valuable insights that collectively enhanced our understanding of the evolving threat landscape and offered thoughtful approaches to building resilience against these persistent challenges.
In addition to me, the panel comprised:
- Erin West, Founder of Operation Shamrock and former Deputy District Attorney for Santa Clara County, opened the session with a powerful overview of "pig butchering" scams—sophisticated fraud schemes that have become increasingly prevalent. As a nationally recognized leader in combating cryptocurrency-related fraud, West has pioneered innovative approaches to fighting these transnational criminal syndicates. Her Operation Shamrock initiative follows a three-pronged strategy ("educate, seize, disrupt") to protect victims and recover stolen funds.
- Dr. David Maimon, Head of Fraud Insights at SentiLink and Professor at Georgia State University's Department of Criminal Justice and Criminology, shared critical insights into how fraudsters operate in online ecosystems. His groundbreaking research through the Evidence-Based Cybersecurity Research Group has uncovered alarming trends in how criminals share information and tools on Telegram channels and the dark web. Dr. Maimon's work has exposed sophisticated fraud operations, including the use of deepfake videos to circumvent security measures and the creation of tools like "FraudGPT" specifically designed for criminal activities.
- Following my presentation on AI-driven deception technologies, Claire McHenry, Deputy Director of the Nebraska Department of Banking and Finance Securities Bureau, provided crucial data-driven insights on fraud trends affecting retail investors across America. Drawing from the NASAA 2024 Enforcement Report, she highlighted the alarming 30% increase in investigations involving internet, social media, and digital assets fraud. McHenry's presentation emphasized how state securities regulators are witnessing cryptocurrency scams, AI-washing, and pig butchering schemes particularly targeting older Americans. As both immediate past-president of NASAA and co-chair of its Seniors Committee, she advocated for a comprehensive whole-of-government approach and stronger public-private partnerships to combat these evolving threats.
Together, our panel provided a broad view of the current threat landscape, from the technical mechanisms of AI-enabled fraud to the regulatory frameworks needed to combat these evolving threats. This multi-faceted approach—combining technical understanding, law enforcement expertise, industry knowledge, and regulatory insight—represents the kind of collaborative effort needed to address the exploitation zone I described in my testimony. Let me share what I presented about the alarming new era of AI-driven deception.
The New Face of Fraud: What I Told the SEC
I opened my remarks with a stark reality: we are entering a new era of deception—one where fraudsters don’t need Hollywood-level budgets or sophisticated hacking skills. AI has erased the skill barrier. Then, I introduced a key concept: The Exploitation Zone.
Figure 2: Screenshot from the video of my testimony.
This is the ever-widening gap between technological advancements and our collective ability to detect and defend against them. The faster AI evolves, the bigger this gap grows—and fraudsters thrive in that gap.
Upon rewatching the presentation, there was a line I said that seems to sum up the issue well: “A population who is unaware of what's possible is a population who is at its very core vulnerable to deception.”
Live Demonstrations: AI Deception in Action
Instead of just talking about AI scams, I showed them in action.
- I demonstrated a fully automated, GenAI-powered voice scambot—the same bot I created and used at last year's Social Engineering CTF at DEFCON
- I demonstrated how AI-generated voices can mimic trusted individuals, making scams more convincing than ever
- I showed how easy it is to create synthetic endorsements that look like real financial experts or celebrities backing a fraudulent scheme
- I showed just how fast and easy it is to use real-time deepfake video software to become anyone you want
The truth was clear: this isn’t a future problem—it’s happening now.
What Needs to Happen Next
The SEC has already taken important steps in this space, including:
- The formation of the Cyber and Emerging Technologies Unit (CETU) to combat AI-driven financial fraud.
- Increased scrutiny on AI-washing, where companies exaggerate AI’s capabilities in financial disclosures.
- Proposed rules addressing AI-driven conflicts of interest in financial decision-making.
But we must go further. In my testimony, I outlined three key areas for action:
- Expand AI Fraud Task Forces – The SEC should dedicate specific resources within CETU to tackle AI-driven financial scams head-on.
- Strengthen AI Transparency & Authentication – Encourage watermarking, digital signatures, and provenance tracking to make AI-generated content easier to verify.
- Mandate AI-Specific Fraud Detection – Financial institutions must adopt deepfake detection, behavioral anomaly monitoring, and multi-factor verification for high-risk transactions.
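The authentication idea in the second recommendation boils down to a simple contract: a publisher attaches a verifiable tag at creation time, and anyone downstream can check that the content hasn't been altered. Here's a minimal sketch using Python's standard-library HMAC—purely illustrative, since real provenance systems use asymmetric signatures and standards like C2PA, and the key here is a made-up placeholder:

```python
import hmac
import hashlib

# Hypothetical publisher key for illustration only; production systems
# would use asymmetric key pairs, not a shared secret.
SECRET_KEY = b"publisher-signing-key"

def sign_content(content: bytes) -> str:
    """Produce a provenance tag: an HMAC-SHA256 over the content bytes."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign_content(content), tag)

video = b"original video bytes"
tag = sign_content(video)

print(verify_content(video, tag))         # untampered content verifies
print(verify_content(b"deepfake", tag))   # altered content fails
```

The point isn't the specific primitive—it's that verification becomes a cheap, mechanical check rather than a judgment call about whether a video "looks real."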
Final Thoughts: The Exploitation Zone Will Always Exist—But We Can Manage It
One thing is certain: we’re not going to stop AI-powered scams entirely. But that doesn’t mean we’re powerless.
- Education is key—individuals, businesses, and regulators must stay ahead of AI-driven deception
- Technical safeguards matter—AI-generated content must be traceable and verifiable
- Policy must adapt—the SEC and financial institutions must act before AI fraud becomes an uncontrollable crisis
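To make the "technical safeguards" point concrete: behavioral anomaly monitoring, in its simplest form, means flagging activity that deviates sharply from an account's own history. The toy z-score check below is a sketch under illustrative assumptions—the threshold and single-feature model are my own simplifications, and production systems use far richer behavioral signals:

```python
from statistics import mean, stdev

def is_anomalous(history: list[float], amount: float, threshold: float = 3.0) -> bool:
    """Flag a transaction more than `threshold` standard deviations
    from the account's historical mean spend."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > threshold

history = [120.0, 95.0, 110.0, 130.0, 105.0]

print(is_anomalous(history, 115.0))    # typical spend: passes quietly
print(is_anomalous(history, 5000.0))   # outlier: route to extra verification
```

In practice a flag like this wouldn't block the transaction outright—it would trigger the multi-factor verification step for high-risk activity described above.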
AI-driven scams are a trust crisis, not just a technology crisis. And on National Slam the Scam Day, I couldn’t think of a more important conversation to have. You can check out the session video here. My section starts at the 1 hour and 17 minute mark. But the other presentations were also great, so you should watch the whole video!