Session
Insecure Vibes: The Risks of AI-Assisted Coding
AI coding assistants like GitHub Copilot and ChatGPT are changing how developers write and ship software, faster than security teams can keep up. But speed comes at a cost: “vibe coding” encourages developers to trust confident-looking code that may be dangerously insecure.
In this talk, we’ll look at real-world examples and research showing how AI tools replicate and amplify insecure patterns, why traditional AppSec controls often fail to catch these issues in time, and how teams can adapt. We’ll explore modern strategies to make AI-assisted coding safer without making it slow (secure RAG references, MCP enforcement layers in the IDE, guardrails, policy integration, and developer education).
Whether you’re on the AppSec side or writing code, this session will equip you with a clearer threat model and practical tools to secure your AI-augmented SDLC.
Outline:
	1.	Intro: Welcome to the Era of Vibe Coding
	◦	What is vibe coding? Where did it come from?
	◦	How AI tools (Copilot, ChatGPT, Tabnine) have changed developer behavior
	◦	Recorded Demo: an insecure function suggested by AI that would typically be accepted without question
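A hypothetical illustration (not the actual recorded demo) of the kind of suggestion that gets accepted: the query is built by string concatenation, so the function is SQL-injectable even though it works on the happy path; the table and column names are made up.
import sqlite3

def get_user(conn: sqlite3.Connection, username: str):
    # Typical AI suggestion: works on the happy path, but injectable
    query = "SELECT * FROM users WHERE username = '" + username + "'"
    return conn.execute(query).fetchall()

def get_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized version a reviewer should insist on
    return conn.execute("SELECT * FROM users WHERE username = ?", (username,)).fetchall()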
	2.	Why AppSec is Struggling to Keep Up
	◦	AI writes fast. Code review is slow. 
	◦	Devs spend more time with AI than with docs, and far more than with the security team
	◦	Devs trust AI too much (report from Stack Overflow): 43% of developers trust the accuracy of AI tools: https://stackoverflow.co/company/press/archive/stack-overflow-2024-developer-survey-gap-between-ai-use-trust
	◦	How “fast shipping” incentivizes insecurity
	3.	What LLMs Actually Learn—and Why That’s a Problem
	◦	Training data: open-source, Stack Overflow, insecure examples
	◦	Numerous articles and studies prove this is problematic
	◦	https://www.cs.umd.edu/~akgul/papers/so.pdf
	◦	AI doesn’t understand security context—just patterns
	◦	Summary of case study: repeated insecure code patterns suggested by multiple tools
	⁃	Title: Do Users Write More Insecure Code with AI Assistants?
	⁃	Authors: Neil Perry, Megha Srivastava, Deepak Kumar, et al.
	4.	Real-World Threats Introduced by AI Coding
	◦	More than half of organizations said they encountered security issues with poor AI-generated code “sometimes” or “frequently,” per a survey by Snyk: https://go.snyk.io/2023-ai-code-security-report-dwn-typ.html
	◦	A Stanford study found that people who used AI to write code “wrote significantly less secure code” but were “more likely to believe they wrote secure code”: https://arxiv.org/pdf/2211.03622
	◦	More if time permits
	◦	Examples from the case study “Do Users Write More Insecure Code with AI Assistants?”:
	◦	Insecure File Upload
Suggested by AI:
file = request.files['file']
file.save('/uploads/' + file.filename)
Risk: No sanitization of filename → Path traversal or RCE possible.
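For contrast, a hardened version might look like the sketch below (assuming the Flask/Werkzeug stack the snippet implies; the route and upload directory are illustrative):
import os
from flask import Flask, request
from werkzeug.utils import secure_filename

app = Flask(__name__)
UPLOAD_DIR = "/uploads"

@app.post("/upload")
def upload():
    file = request.files["file"]
    safe_name = secure_filename(file.filename)      # strips "../" and path separators
    if not safe_name:
        return "invalid filename", 400
    file.save(os.path.join(UPLOAD_DIR, safe_name))  # writes stay inside UPLOAD_DIR
    return "ok", 201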
	◦	Hardcoded API Key
Suggested by AI:
api_key = 'sk_test_51L...'
response = requests.get(url, headers={'Authorization': api_key})
Risk: Credentials exposed in source control or logs.
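A hedged counter-example: read the credential from the environment (or a secrets manager) at runtime instead of committing it; the variable name and endpoint are illustrative.
import os
import requests

url = "https://api.example.com/v1/charges"   # illustrative endpoint
api_key = os.environ["PAYMENTS_API_KEY"]     # injected at deploy time, never committed to source control
response = requests.get(url, headers={"Authorization": api_key})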
	◦	No HTTPS Enforcement in Redirect
Suggested by AI:
return redirect('http://' + user_input_url)
Risk: Downgrade attack or open redirect vulnerability.
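One common mitigation, sketched here assuming a Flask app: check the target against an allowlist of hosts and force HTTPS before redirecting (the allowlist is illustrative).
from urllib.parse import urlparse
from flask import Flask, abort, redirect, request

app = Flask(__name__)
ALLOWED_HOSTS = {"example.com", "app.example.com"}   # illustrative allowlist

@app.get("/go")
def go():
    target = urlparse(request.args.get("next", ""))
    if (target.hostname or "") not in ALLOWED_HOSTS:
        abort(400)                                   # refuse open redirects
    return redirect(f"https://{target.hostname}{target.path}")  # force HTTPS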
	◦	The “it works, let’s ship it” mindset
	5.	What We Can Do About It
	◦	Secure coding and privacy guardrails for AI-assisted devs
	◦	RAG servers with secure coding examples that the assistant consults first, taking precedence over what it learned in training
	◦	Prompts that apply your secure coding policy or standard to code generated by the AI.
	◦	MCP servers to call SAST/DAST/secret-scanning/IaC/SCA and similar tools from the IDE; this can also be the final enforcement point for your secure coding policy (see the sketch after this list).
	◦	Training developers to critically evaluate AI code
	◦	Use AI to fight AI: anomaly detection, review assistance, mini ‘just in time’ lessons on secure coding
	◦	All the regular AppSec activities: threat modelling, security requirements, a secure SDLC, secure coding training, etc. 
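A minimal sketch of the MCP enforcement-layer idea above: an MCP server exposing one tool the IDE’s assistant can call to scan freshly generated code before it is accepted. It assumes the official MCP Python SDK (pip install mcp) and Semgrep on the PATH; the server and tool names are made up, and any SAST tool could stand in for Semgrep.
import json
import subprocess
import tempfile

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("secure-coding-gate")

@mcp.tool()
def scan_generated_code(code: str) -> str:
    """Run a Semgrep pass over an AI-generated Python snippet and report findings."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as tmp:
        tmp.write(code)
        path = tmp.name
    result = subprocess.run(
        ["semgrep", "--config", "auto", "--json", path],
        capture_output=True, text=True,
    )
    findings = json.loads(result.stdout).get("results", [])
    if not findings:
        return "No findings: snippet passes the baseline scan."
    return json.dumps([f["check_id"] for f in findings], indent=2)

if __name__ == "__main__":
    mcp.run()   # stdio transport, so the IDE can spawn the server locally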
	6.	Call to Action: Using AI for Security
	◦	Adjust your SDLC to include checks for AI related issues (threat modelling, tooling, policies, etc.)
	◦	Train your developers so they can evaluate code properly and use the AI securely
	◦	Provide them with safe AI options to use
	◦	Switch to AI-aware AppSec tooling
	◦	Conclusion & summary
	7.	Resources: where to learn more
	◦	PDF summary of talk including sources
	◦	#CyberMentoringMonday - find a professional mentor online
	◦	my personal blog and socials
Sources:
https://arxiv.org/html/2310.02059v2
https://www.techtarget.com/searchsecurity/news/366571117/GitHub-Copilot-replicating-vulnerabilities-insecure-code
https://www.pillar.security/blog/new-vulnerability-in-github-copilot-and-cursor-how-hackers-can-weaponize-code-agents
https://www.tabnine.com/blog/top-11-chatgpt-security-risks-and-how-to-use-it-securely-in-your-organization/ (which obviously has bias, since it’s from Tabnine, but still)
And others; there are many more articles on this topic.
                                    
                                
                            Tanya Janca
Secure Coding Trainer at She Hacks Purple
Victoria, Canada