AI & FINRA: Meeting Compliance Expectations in 2025

  • Stagg Wabnik
  • Jun 13
  • 3 min read

Navigating AI Adoption Under FINRA Oversight

Artificial intelligence is transforming the way financial firms communicate, analyze data, and engage with clients. But as the tools evolve, so do regulatory expectations. FINRA’s message is clear: innovation is welcome, but compliance is non-negotiable.


Firms looking to incorporate AI into marketing, supervision, or client services need to align these tools with core regulatory values—fair dealing, investor protection, and transparency. The goal isn’t to slow down progress but to implement AI responsibly, with oversight built in from the beginning.


The Impact of Regulatory Notice 24-09

In June 2024, FINRA released Regulatory Notice 24-09 to address the growing use of AI in broker-dealer communications. The notice confirmed that FINRA Rule 2210 governs all public-facing communications, no matter how they’re created—whether by a person, a third-party vendor, or an AI model.


What Firms Need to Know:

  • AI-generated content must meet Rule 2210 standards for accuracy, fairness, and proper disclosures

  • Rule 3110 still applies—firms remain responsible for supervision and approval of content

  • Using AI does not reduce or transfer compliance obligations

In short, relying on AI doesn’t shield a firm from scrutiny. Responsibility still lies with the firm.


Where the Risks Show Up

Without proper oversight, AI can create compliance blind spots:

  • Inaccurate Output: AI may generate outdated, misleading, or overly confident responses

  • Missing Disclosures: Automated content might overlook required disclaimers or risk language

  • Gaps in Recordkeeping: AI-generated material still needs to be stored per SEC Rule 17a-4 and FINRA Rule 4511

  • False Sense of Security: Overreliance on AI can lead firms to skip necessary human checks


Building a Compliant AI Framework

Avoiding these pitfalls starts with a solid plan. Here’s a practical approach:


1. Assign Human Supervision

Someone with supervisory authority under Rule 3110 should review AI-generated communications to ensure compliance. Treat AI as a tool, not a substitute for human oversight and judgment.
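
As a rough illustration, the Python sketch below shows what that gate can look like in practice. It assumes a hypothetical in-house workflow; names like ReviewQueue and approve() are illustrative, not part of any FINRA-provided system. The point it demonstrates is simple: nothing leaves the queue without a recorded human sign-off.

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class Draft:
        content: str
        source: str                      # person or AI tool that produced the draft
        approved_by: str | None = None   # Rule 3110 supervisor who signed off
        approved_at: datetime | None = None

    class ReviewQueue:
        def __init__(self):
            self._pending: list[Draft] = []

        def submit(self, draft: Draft) -> None:
            self._pending.append(draft)

        def approve(self, draft: Draft, supervisor: str) -> None:
            draft.approved_by = supervisor
            draft.approved_at = datetime.now(timezone.utc)

        def publishable(self) -> list[Draft]:
            # Only drafts with a recorded human sign-off are releasable.
            return [d for d in self._pending if d.approved_by is not None]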


2. Create Internal Usage Guidelines

Document how and when AI can be used. Train staff on its limits and establish checkpoints that require human review.


3. Archive All AI-Generated Material

Use your existing recordkeeping system to store AI-generated content in line with FINRA and SEC retention rules.
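
As a hedged sketch, the code below tags each AI-generated item with the metadata an examiner would expect: who generated it, who reviewed it, when it was archived, and a content hash for integrity checks. The file-based archive is a stand-in; a real firm would write to its existing WORM-compliant books-and-records store.

    import hashlib
    import json
    from datetime import datetime, timezone
    from pathlib import Path

    ARCHIVE_DIR = Path("archive")  # stand-in for the firm's books-and-records store

    def archive_ai_content(content: str, tool: str, reviewer: str) -> Path:
        record = {
            "content": content,
            "generated_by": tool,
            "reviewed_by": reviewer,
            "archived_at": datetime.now(timezone.utc).isoformat(),
            # A content hash supports later integrity checks during an exam.
            "sha256": hashlib.sha256(content.encode("utf-8")).hexdigest(),
        }
        ARCHIVE_DIR.mkdir(exist_ok=True)
        path = ARCHIVE_DIR / f"{record['sha256'][:16]}.json"
        path.write_text(json.dumps(record, indent=2))
        return path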


4. Schedule Regular Spot Checks

Review a sample of AI content regularly. Look for accuracy, tone, and compliance with firm policies.
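
One way to make spot checks routine is to sample the archive programmatically. The sketch below pulls a random 10% of archived records for human review; both the 10% rate and the archive layout (from the sketch above) are assumptions, not regulatory requirements.

    import json
    import random
    from pathlib import Path

    def sample_for_review(archive_dir: str = "archive", rate: float = 0.10) -> list[dict]:
        records = [json.loads(p.read_text()) for p in Path(archive_dir).glob("*.json")]
        k = max(1, int(len(records) * rate)) if records else 0
        return random.sample(records, k)

    # Reviewers work through the sample by hand, checking accuracy and tone.
    for record in sample_for_review():
        print(record["archived_at"], record["generated_by"], record["content"][:60])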


5. Loop In Legal and Compliance Early

AI decisions shouldn’t happen in silos. Ensure that your legal and compliance teams are involved in vetting and approving the tools.


Don’t Overlook Cybersecurity and Vendor Risk

Most AI systems are hosted in the cloud or accessed through third-party platforms, which makes vendor management a critical part of AI compliance. If you're bringing AI into your workflow:

  • Verify how client data is encrypted and stored

  • Avoid using sensitive or non-public data for AI training (a redaction sketch follows this list)

  • Review contracts for audit rights, liability clauses, and data protections
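
On the second point, one practical safeguard is to scrub obvious non-public identifiers before any text reaches a third-party AI service. The patterns below are illustrative and far from exhaustive, so treat this as one defense-in-depth layer, not a complete control.

    import re

    REDACTIONS = [
        (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),      # U.S. Social Security numbers
        (re.compile(r"\b\d{8,17}\b"), "[ACCOUNT]"),           # bare account numbers
        (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),  # email addresses
    ]

    def scrub(text: str) -> str:
        # Apply each pattern in order; the SSN pattern runs before the broader
        # account-number pattern sees the same digit runs.
        for pattern, label in REDACTIONS:
            text = pattern.sub(label, text)
        return text

    print(scrub("Client SSN 123-45-6789, account 12345678, jane@example.com"))
    # -> Client SSN [SSN], account [ACCOUNT], [EMAIL]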


Even though FINRA’s cybersecurity guidance doesn’t name AI specifically, the principles still apply. Any tool that touches sensitive client data must meet the same high standards as other regulated platforms.


Contact Stagg Wabnik Law Group

AI is here to stay, but so are FINRA’s compliance expectations. With the right structure in place, firms can gain efficiency without risking regulatory trouble.


Stagg Wabnik Law Group advises broker-dealers, investment firms, and financial institutions on integrating technology while meeting compliance obligations. To discuss how your team can safely deploy AI, call (516) 812-4550 or visit the contact page.
