Speaker

Tanya Janca

Secure Coding Trainer at She Hacks Purple

Victoria, Canada


Tanya Janca, aka SheHacksPurple, is the best-selling author of 'Alice and Bob Learn Secure Coding' and 'Alice and Bob Learn Application Security'. She is currently the CEO and secure coding trainer at She Hacks Purple Consulting. Over her 28-year IT career she has won countless awards (including OWASP Lifetime Distinguished Member and Hacker of the Year), spoken all over the planet, and blogged prolifically. Tanya has trained thousands of software developers and IT security professionals via her online academies (We Hack Purple and Semgrep Academy) and her live training programs. Having performed counter-terrorism work, led security for the 42nd Canadian general election, and developed or secured countless applications, Tanya Janca is widely considered an international authority on the security of software.

Advisor: Smithy, Katilyst
Board Member: Forte Group
Founder: DevSecStation, We Hack Purple, OWASP DevSlop, #CyberMentoringMonday, WoSEC
Contributor: OWASP Top Ten, StackOverflow

Area of Expertise

  • Information & Communications Technology

Topics

  • DevOps
  • DevSecOps
  • AppSec
  • Application Security
  • Web Development
  • Software Development
  • Incident Response
  • Secure Coding
  • Secure Coding Practices
  • Secure Coding & Cybersecurity
  • AI
  • Artificial Intelligence
  • Artificial Intelligence Security
  • Vibe Coding

Top Twenty Five Security Tips for Node.js

Welcome to the world of Node.js, where the power of JavaScript meets the server side of our applications. As developers, we love the speed, scalability, and flexibility that Node.js offers, but with these advantages comes the responsibility to ensure the safety and reliability of the applications we build. This talk provides twenty-five tips for writing secure, rugged, and reliable Node.js code.

Using Artificial Intelligence, Safely

Artificial intelligence is increasingly prevalent in software development, and as a result its safe and responsible use has become critical. We will dive into risks such as unchecked decision-making, AI agency, lack of validation, broken or missing oversight, and sensitive data exposure. We will also provide constructive insights on leveraging AI for code development, vulnerability detection, threat modeling, design assistance, and more. Through real-life examples and practical advice, this session will help you develop with AI, safely.

Top Ten Tips for Python Security

In the realm of writing secure Python code, it's not only about functionality and performance; it's equally vital to shield your application and users from potential threats and vulnerabilities. Given Python's immense popularity, it becomes even more essential that we acquire the skills to build secure, dependable, and robust applications. Join me in this talk as we embark on a shared journey to master the art of secure Python coding. Together, let's empower ourselves to create a safer digital world.
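For a flavor of the kind of tip covered, here is a minimal sketch (illustrative only, using Python's built-in sqlite3 module) contrasting string-built SQL with a parameterized query:

# Illustrative sketch: unsafe vs. safe query construction in Python.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE users (name TEXT, email TEXT)')
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_input = "alice' OR '1'='1"  # attacker-controlled value

# Unsafe: string concatenation lets the input rewrite the query (SQL injection).
# rows = conn.execute("SELECT email FROM users WHERE name = '" + user_input + "'")

# Safe: a parameterized query treats the input strictly as data.
rows = conn.execute('SELECT email FROM users WHERE name = ?', (user_input,))
print(rows.fetchall())  # [] -- the injection attempt matches nothing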

Threat Modeling Developer Behaviour: The Psychology of Bad Code

Security teams threat model systems, but rarely do we threat model the developers building them. What if some of the most persistent AppSec problems aren’t purely technical—but behavioral?
This talk dives into the psychology of insecure code, using principles from behavioral economics to explain why developers take risky shortcuts, ignore secure practices, or ship code that “just vibes.” From copying insecure Stack Overflow snippets, to skipping documentation, to shipping untested features under tight deadlines—these aren’t personal failings. They’re predictable cognitive patterns influenced by incentives, stress, and how our brains are wired.
We’ll explore how well-known concepts such as present bias, automation bias, the bystander effect, and overconfidence play out in real-world development. Then we’ll shift from insight to action—offering behavioral nudges and design patterns you can apply in your SDLC, tools, and team culture to make secure behavior the default.
This talk blends psychology, security, and dev reality to reframe AppSec—not as a checklist, but as a human system.

The AppSec Poverty Line: Minimal Viable Security

Not every team has a security budget. Not every project has a dedicated AppSec engineer. But every product exposed to the internet needs some level of security to survive.

This talk explores what I call “The AppSec Poverty Line”, also known as “Minimal Viable Security”: the minimum viable set of practices, tools, and cultural shifts that under-resourced dev teams can adopt to meaningfully improve application security. Whether you're a startup with no security hires, an independent dev, or part of a team without a security budget, this talk will help you prioritize what actually matters.

We’ll cover practical approaches to getting from zero to secure-ish, with a focus on:
• Training developers to write more secure code, and spot unsafe code
• Cultivating a security-positive culture
• Leveraging open-source tools that punch above their weight (a minimal sketch follows this list)
• Knowing when “good enough” really is enough — and when it’s not
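To illustrate the open-source tooling point, here is a minimal sketch of a "poverty line" scan gate. The tool choices (Semgrep for static analysis, pip-audit for known-vulnerable dependencies) are illustrative assumptions, not prescriptions from the talk:

# Minimal viable security gate: run two free scanners, fail the build on findings.
import subprocess
import sys

def run(cmd):
    print('running:', ' '.join(cmd))
    return subprocess.run(cmd).returncode

failed = 0
failed += run(['semgrep', 'scan', '--config', 'auto', '--error', '.'])  # nonzero exit on findings
failed += run(['pip-audit'])  # nonzero exit on known-vulnerable dependencies

sys.exit(1 if failed else 0)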

Secure Code Is Critical Infrastructure: Hacking Policy for the Public Good

What happens when a security professional tries to help a government fix its insecure software?
In this talk, I’ll share my story: from writing a secure coding policy and offering it to the Canadian government, lobbying elected officials, contacting agencies like CRA about their poor security practices—and being met with silence, deflection, or outright dismissal.
I didn’t stop there. I wrote public letters, went on podcasts, published on Risky Biz, even got interviewed by CBC. But the institutions in charge of protecting our data? Either silence or “No comment, because security.”
This isn’t just a rant—it’s a roadmap. I’ll show you the secure coding guideline I created (free to reuse), explain why governments need public-facing AppSec policies, and outline how we can push for secure-by-default practices as citizens, hackers, and builders.
Because secure code isn’t just for dev teams—it’s for democracy, privacy, and public safety.
Let’s make it law. Let’s make it public.

Maturing Your Application Security Program

After working with over 400 companies on their application security programs, I find the most common question is “What’s next?” Teams want to know how to mature their programs, and when they look at the maturity models available, they find them intimidating and so far beyond their current maturity level that progress feels impossible. In this talk I will take you through three common AppSec program maturity levels I have encountered over the years, with practical, actionable next steps you can take immediately to improve your security posture.

DevSecOps Worst Practices

Quite often when we read best practices, we are told ‘what’ to do but not ‘why’. When we are told to ensure there are no false positives in the pipeline, the reason seems obvious, but not every part of DevOps is that intuitive, and not all ‘best practices’ make sense at first blush. Let’s explore tried, tested, and failed methods, then flip them on their heads, so we know not only how to avoid these DevSecOps WORST practices, but also why avoiding them matters.

30 Tips for Secure JavaScript

In this talk, we will cover 30 tips for writing more secure JavaScript, emphasizing what to do, what NOT to do, and how to use open-source tooling to enhance security. JavaScript is not only the most popular web programming language; it also faces security threats like XSS and code injection, so our JavaScript needs to be tough, rugged, and secure. We'll touch only on items specific to JavaScript, as opposed to language-agnostic topics such as encryption or authentication. By the end, you'll gain insights into selecting the best framework, adopting secure coding practices, and leveraging tools for web application security, whether you're a seasoned developer or a beginner seeking practical guidance.

Insecure Vibes: The Risks of AI-Assisted Coding

AI coding assistants like GitHub Copilot and ChatGPT are changing how developers write and ship software, faster than security teams can keep up. But speed comes at a cost: “vibe coding” encourages developers to trust confident-looking code that may be dangerously insecure.

In this talk, we’ll look at real-world examples and research showing how AI tools replicate and amplify insecure patterns, why traditional AppSec controls often fail to catch these issues in time, and how teams can adapt. We’ll explore modern strategies to make AI-assisted coding safer without making it slow (secure RAG references, MCP enforcement layers in the IDE, guardrails, policy integration, and developer education).

Whether you’re on the AppSec side or writing code, this session will equip you with a clearer threat model and practical tools to secure your AI-augmented SDLC.

Outline:
1. Intro: Welcome to the Era of Vibe Coding
◦ What is vibe coding? Where did it come from?
◦ How AI tools (Copilot, ChatGPT, Tabnine) have changed developer behavior
◦ Recorded demo: an insecure function suggested by AI that would generally be accepted without question
2. Why AppSec is Struggling to Keep Up
◦ AI writes fast. Code review is slow.
◦ Devs spend more time with AI than with docs, and far more than with the security team
◦ Devs trust AI too much (per the Stack Overflow 2024 Developer Survey, 43% of developers trust the accuracy of AI tools): https://stackoverflow.co/company/press/archive/stack-overflow-2024-developer-survey-gap-between-ai-use-trust
◦ How “fast shipping” incentivizes insecurity
3. What LLMs Actually Learn—and Why That’s a Problem
◦ Training data: open-source, Stack Overflow, insecure examples
◦ Numerous articles and studies show this is problematic, e.g. https://www.cs.umd.edu/~akgul/papers/so.pdf
◦ AI doesn’t understand security context—just patterns
◦ Summary of a case study showing repeated insecure code patterns suggested by multiple tools: “Do Users Write More Insecure Code with AI Assistants?” by Neil Perry, Megha Srivastava, Deepak Kumar, et al.

4. Real-World Threats Introduced by AI Coding
◦ More than half of organizations said they encountered security issues with poor AI-generated code “sometimes” or “frequently,” per a survey by Snyk: https://go.snyk.io/2023-ai-code-security-report-dwn-typ.html
◦ A Stanford study found people who used AI to write code “wrote significantly less secure code” but were “more likely to believe they wrote secure code”: https://arxiv.org/pdf/2211.03622
◦ More if time permits
◦ Examples from the case study “Do Users Write More Insecure Code with AI Assistants?” (secure rewrites are sketched after this list):
◦ Insecure File Upload
Suggested by AI:
file = request.files['file']
file.save('/uploads/' + file.filename)
Risk: no sanitization of the filename → path traversal or RCE possible.
◦ Hardcoded API Key
Suggested by AI:
api_key = 'sk_test_51L...'
response = requests.get(url, headers={'Authorization': api_key})
Risk: credentials exposed in source control or logs.
◦ No HTTPS Enforcement in Redirect
Suggested by AI:
return redirect('http://' + user_input_url)
Risk: downgrade attack or open redirect vulnerability.
◦ The “it works, let’s ship it” mindset
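For contrast, here is one possible secure rewrite of each snippet above: a hedged sketch assuming Flask and the requests library, as the fragments imply. Like the originals, the surrounding route code is elided, so url and user_input_host come from context.

# Possible fixes for the three AI-suggested snippets above (one option each).
import os
import requests
from flask import abort, redirect, request
from werkzeug.utils import secure_filename

# 1. File upload: sanitize the filename before writing to disk.
file = request.files['file']
safe_name = secure_filename(file.filename)  # strips path separators and '..'
file.save(os.path.join('/uploads', safe_name))

# 2. API key: load credentials from the environment, not from source code.
api_key = os.environ['API_KEY']
response = requests.get(url, headers={'Authorization': api_key})

# 3. Redirect: enforce HTTPS and an allowlist rather than raw user input.
ALLOWED_HOSTS = {'example.com'}  # hypothetical allowlist
if user_input_host not in ALLOWED_HOSTS:
    abort(400)
return redirect('https://' + user_input_host)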
5. What We Can Do About It
◦ Secure coding and privacy guardrails for AI-assisted devs
◦ RAG sources with secure coding examples for the AI to reference first, ahead of what it learned in training
◦ Prompts that apply your secure coding policy or standard to code generated by the AI (a sketch follows this list)
◦ MCP servers to call SAST/DAST/secret-scanning/IaC/SCA/etc. tools from the IDE; this can also serve as the final application of your secure coding policy
◦ Training developers to critically evaluate AI code
◦ Use AI to fight AI: anomaly detection, review assistance, mini ‘just in time’ lessons on secure coding
◦ All the regular AppSec activities: threat modelling, security requirements, a secure SDLC, secure coding training, etc.
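A sketch of the "policy in the prompt" idea from the list above, assuming the OpenAI Python client purely for illustration; the policy file name and model name are placeholders:

# Prepend your secure coding policy to every code-generation request so the
# model is steered by the policy before it suggests anything.
from openai import OpenAI

POLICY = open('secure_coding_policy.md').read()  # hypothetical policy file
client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_code(task: str) -> str:
    response = client.chat.completions.create(
        model='gpt-4o',  # placeholder model name
        messages=[
            {'role': 'system',
             'content': 'Follow this secure coding policy strictly:\n' + POLICY},
            {'role': 'user', 'content': task},
        ],
    )
    return response.choices[0].message.content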
6. Call to Action: Using AI for Security
◦ Adjust your SDLC to include checks for AI-related issues (threat modelling, tooling, policies, etc.)
◦ Train your developers so they can evaluate code properly and use AI securely
◦ Provide them SAFE AI options to use
◦ Switch to AI-aware AppSec tooling
◦ Conclusion & summary
7. Resources, where to learn more
◦ PDF summary of talk, including sources
◦ #CyberMentoringMonday: find a professional mentor online
◦ My personal blog and socials

Sources:
https://arxiv.org/html/2310.02059v2
https://www.techtarget.com/searchsecurity/news/366571117/GitHub-Copilot-replicating-vulnerabilities-insecure-code
https://www.pillar.security/blog/new-vulnerability-in-github-copilot-and-cursor-how-hackers-can-weaponize-code-agents
https://www.tabnine.com/blog/top-11-chatgpt-security-risks-and-how-to-use-it-securely-in-your-organization/ (admittedly biased, since it’s from Tabnine, but still useful)
And others; there are many more articles on this topic.

BSides Seattle 2024 Sessionize Event

April 2024, Redmond, Washington, United States
