20 November 2025 09:30 - 10:00
Break your agent before someone else does: A builder's guide to AI security
You've built an AI agent. It works beautifully in testing. But have you tried to break it?
The barrier to building AI agents has never been lower. LangChain, n8n, OpenAI's agent builder, MCP tools, or even ChatGPT-generated Python code can get you from idea to deployment in hours. But this accessibility comes with a hidden cost: most AI builders are shipping agents with critical security vulnerabilities they don't even know exist.
In this hands-on session, we'll attack a live AI agent architecture together, the same setup many developers are using in production. You'll see real exploits against LangChain-based agents, MCP tools, and common integration patterns, and learn exactly how attackers think about your code.