AI tools are becoming a core part of modern development workflows, but they come with serious security risks most developers aren't thinking about. In this episode, Matt and Mike break down five AI security threats that are already happening in the real world. From prompt injection attacks and rogue AI agents with access to your email, to runaway API bills and poisoned models slipping into your stack, these aren't hypothetical problems, and they could seriously impact you, your users, and your company. If you're using AI in production, in your codebase, or inside your company workflows, this episode will help you understand what can go wrong, and how to protect yourself before it does.
1. Prompt Injection
What it is: Attackers craft malicious inputs that override an AI's intended instructions—like SQL injection, but for language models.
Why it's dangerous: Even OpenAI admits they haven't solved this. The model can't reliably tell the difference between your instructions and an attacker's.
Mitigation: Restrict what LLMs can actually do—never let them execute code, write files, or call APIs without human approval in the loop.
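As a rough illustration of that human-in-the-loop idea, here's a minimal sketch in Python. The tool names and the approval flow are hypothetical, not taken from any specific agent framework:

```python
# Minimal sketch of a human-approval gate for LLM tool calls.
# Tool names and the approval flow are hypothetical examples.

ALLOWED_TOOLS = {"search_docs"}  # read-only tools the model may call freely
NEEDS_APPROVAL = {"send_email", "run_code", "write_file"}  # side effects need a human

def execute_tool(name: str, args: dict) -> str:
    # Stub: dispatch to your real tool implementations here.
    return f"(executed {name} with {args})"

def handle_tool_call(name: str, args: dict) -> str:
    if name in ALLOWED_TOOLS:
        return execute_tool(name, args)
    if name in NEEDS_APPROVAL:
        print(f"Model wants to call {name} with {args}")
        if input("Approve? [y/N] ").strip().lower() == "y":
            return execute_tool(name, args)
        return "Denied by human reviewer."
    return f"Refused unknown tool: {name}"
```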
2. Rogue AI Agents
What it is: When you grant AI agents access to Gmail, Slack, or other platforms, a single compromised prompt can instruct the agent to leak data, send emails, or take actions on your behalf.
Why it's dangerous: Researchers demonstrated zero-click exploits against ChatGPT, Microsoft Copilot, Salesforce Einstein, and Google Gemini—all currently in production.
Mitigation: Apply least privilege—only grant the minimum permissions needed, and audit what your agents actually have access to.
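One way to make least privilege concrete is to declare each agent's scopes in code where they can be reviewed and audited. A minimal sketch, with made-up agent and scope names:

```python
# Sketch: explicit per-agent permission allowlists that can be audited in code review.
# Agent names and scope strings are illustrative, not tied to a real platform.

AGENT_SCOPES = {
    "inbox-summarizer": {"gmail.read"},                # read-only, no send permission
    "standup-bot":      {"slack.read", "slack.post"},  # scoped to chat, nothing else
}

def check_scope(agent: str, scope: str) -> None:
    granted = AGENT_SCOPES.get(agent, set())
    if scope not in granted:
        raise PermissionError(f"{agent} is not granted {scope}")

try:
    check_scope("inbox-summarizer", "gmail.send")  # never granted, so this is blocked
except PermissionError as err:
    print(err)
```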
3. Runaway API Bills
What it is: AI coding tools or CLI agents get stuck in loops, retry failed requests endlessly, or include way too much context—burning through thousands of dollars in API credits overnight.
Why it's dangerous: One developer reported waking up to a $2,400 bill after Cursor got stuck in a retry loop. AI agents don't know or care what things cost—they'll happily keep trying until your credit card maxes out.
Mitigation: Set hard spending limits and usage alerts in your API dashboard, and use tools that support token budgets or automatic stop conditions.
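Dashboard limits are the first line of defense, but you can also enforce a budget client-side so a retry loop can't run forever. A rough sketch assuming the openai Python SDK (v1+); the model name and token caps are placeholders:

```python
# Sketch of a client-side token budget; numbers and model name are placeholders.
from openai import OpenAI

MAX_TOTAL_TOKENS = 200_000  # hard cap for this process; tune to your budget
client = OpenAI()
tokens_used = 0

def ask(prompt: str) -> str:
    global tokens_used
    if tokens_used >= MAX_TOTAL_TOKENS:
        raise RuntimeError("Token budget exhausted; refusing to call the API again.")
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": prompt}],
        max_tokens=500,       # also cap each individual response
    )
    tokens_used += response.usage.total_tokens
    return response.choices[0].message.content
```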
4. Shadow AI
What it is: Employees using personal AI accounts for work, pasting proprietary code into ChatGPT, or spinning up unauthorized AI agents connected to internal systems.
Why it's dangerous: Almost half of enterprise AI usage happens through unmanaged personal accounts—you can't secure what you don't know exists.
Mitigation: Provide sanctioned AI tools that meet employee needs, and implement visibility into AI usage across your organization.
5. Poisoned Models
What it is: Attackers corrupt training data or hijack model namespaces on platforms like Hugging Face, injecting backdoors or malware into models before you ever download them.
Why it's dangerous: Poisoning just 0.01% of training data can significantly alter model behavior, and unlike traditional malware, there's no patch—you may need to retrain from scratch.
Mitigation: Pin specific model versions, verify provenance, and never use trust_remote_code=True on models you haven't audited.
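For example, with the Hugging Face transformers library you can pin an exact, audited commit and keep remote code execution disabled. The model name and commit hash below are placeholders:

```python
# Sketch: pin a specific model revision and refuse to run repo-supplied code.
# "org/model" and the commit hash are placeholders.
from transformers import AutoModel, AutoTokenizer

REVISION = "4f53cda18c2baa0c7c7e02f2b6a0e6a1f3b2c1d0"  # the exact commit you audited

tokenizer = AutoTokenizer.from_pretrained("org/model", revision=REVISION)
model = AutoModel.from_pretrained(
    "org/model",
    revision=REVISION,
    trust_remote_code=False,  # never execute arbitrary code shipped with the model repo
)
```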
Learn to code using Scrimba with their interactive follow-along code editor.
Join their exclusive Discord communities and network to find your first job!
Use our affiliate link for a 20% discount!
We receive a monetary kickback if you use our affiliate link and make a purchase.
Prices subject to change and are listed in USD.