Speaker

Ben Dechrai

Disaster Postponement Officer

Kansas City, Missouri, United States

Ben Dechrai is a technologist with a strong focus on security and privacy, recognised as an MVP for his exceptional contributions to the community. Known for his ability to distil complex technical concepts into engaging, digestible portions, Ben empowers developers through a deep understanding of design principles, security considerations, and coding practices. With over two decades of experience in software engineering, security, and architecture, Ben is a published author and has consulted for companies and investors across numerous industries. He is deeply involved in the tech community, running technology conferences and workshops to share his expertise.

Area of Expertise

  • Information & Communications Technology

Topics

  • IoT
  • Security and IoT
  • Security
  • Application Security
  • API Security
  • Identity
  • Identity Management
  • Identity and Access Management
  • Web API
  • Web APIs
  • Web
  • Web Apps
  • Web Frontend
  • Web Security
  • Web & Mobile
  • Web Development
  • Web Applications
  • Web Application Development
  • Web Application Security
  • Modern Web
  • Modern Web and UX
  • Progressive Web Apps
  • JavaScript
  • JavaScriptCore
  • JavaScript & TypeScript
  • Modern JavaScript Frameworks
  • TypeScript
  • Node
  • NodeJS
  • Expert in Node.js
  • IT Security
  • Cloud Security
  • Data Security
  • Cyber Security
  • Cloud App Security
  • AI and Cybersecurity
  • Cybersecurity Awareness
  • Developer
  • Developer Tools
  • Developers
  • Developer Advocacy
  • Developer Advocate
  • Developer Relations
  • Developer Marketing
  • Developer Experience
  • Developer Communities
  • Developer Productivity
  • Developer Technologies
  • Backend Developer
  • Android Developer
  • Using AI and LLMs

What Building an AI Product Actually Taught Me

This isn't a talk about AI-assisted coding. It's about what happens when LLM responses drive your application's behaviour; when the output gets parsed, validated, and executed by your business logic.

While building BraidFlow, I discovered that the industry's documented problems aren't solved by better models or bigger context windows. Context drift in multi-turn conversations isn't just a prompt engineering challenge; it's an architectural one. Structured output reliability doesn't need JSON mode; schema field ordering and validation-driven retry logic function as instructions in their own right. And cost optimisation isn't about getting the one-shot right with the best-performing models; informed retry prompts with cheap models can work just as well.
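
To make the validation-driven retry idea concrete, here's a minimal sketch in TypeScript, assuming a zod schema and a generic callModel function standing in for any LLM client; the schema and prompts are hypothetical, not BraidFlow's actual code.

```typescript
import { z } from "zod";

// Hypothetical schema; field ordering doubles as implicit instruction to the model.
const TaskSchema = z.object({
  title: z.string(),
  priority: z.enum(["low", "medium", "high"]),
  dueDate: z.string().optional(),
});

type Task = z.infer<typeof TaskSchema>;

// callModel stands in for any LLM client; a cheap model is fine here,
// because validation failures become the instructions for the next attempt.
async function getStructuredTask(
  callModel: (prompt: string) => Promise<string>,
  userPrompt: string,
  maxAttempts = 3,
): Promise<Task> {
  let prompt = userPrompt;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const raw = await callModel(prompt);
    try {
      const result = TaskSchema.safeParse(JSON.parse(raw));
      if (result.success) return result.data;
      // Informed retry: feed the validation errors back verbatim.
      prompt = `${userPrompt}\n\nYour previous reply failed validation:\n${result.error.message}\nFix those fields and reply with JSON only.`;
    } catch {
      prompt = `${userPrompt}\n\nYour previous reply was not valid JSON. Reply with JSON only.`;
    }
  }
  throw new Error("No schema-valid response after retries");
}
```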

I'll share the benchmarking data that surprised me, the architectural patterns I tried along the way, and the prompt engineering insights that made a real difference. You'll see real code, real failures, and the decisions that finally worked.

No theory. No hand-waving. Just lessons from shipping features that had to work.

When LLMs Go Rogue: Securing Prompts and Ensuring Persona Fidelity

Even the most carefully crafted system prompts can “go rogue,” reverting to generic assistant mode or leaking hidden instructions, undermining security, consistency, and user trust. Drawing on hard-earned lessons from building a goal-oriented AI group chat platform, this session delivers:

* Multiple prompt-leakage and prompt-reversion examples showcasing real-world LLM failures
* Live demos of evaluation workflows, detecting and analysing rogue or unexpected responses in real time
* Practical security patterns for prompt engineering to mitigate leakage and fallback risks
* Techniques for adding nondeterministic evaluation tests into your deployment pipeline (see the sketch below)
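
As a taste of that last bullet, here's a minimal sketch of a nondeterministic leakage test; the askAssistant function and the canary token are illustrative assumptions, not the platform's actual harness.

```typescript
// CANARY is a token planted inside the real system prompt; askAssistant
// stands in for your chat endpoint.
const CANARY = "X-SYS-CANARY-7f3a";

async function detectPromptLeakage(
  askAssistant: (message: string) => Promise<string>,
  probes: string[],
  runsPerProbe = 5,
): Promise<string[]> {
  const leaks: string[] = [];
  for (const probe of probes) {
    // Responses are nondeterministic, so each probe runs several times.
    for (let run = 1; run <= runsPerProbe; run++) {
      const reply = await askAssistant(probe);
      // A reply echoing the canary means hidden instructions escaped.
      if (reply.includes(CANARY)) leaks.push(`${probe} (run ${run})`);
    }
  }
  return leaks;
}

// A pipeline step would fail the build whenever leaks.length > 0.
```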

This no-fluff, demo-driven talk equips engineers and security practitioners with battle-tested patterns to keep LLM-powered applications on-brand and secure. You’ll leave with open-source repos, threat-model templates, and actionable takeaways to implement immediately.

Building Identity into LLM Workflows with Verifiable Credentials

LLMs power everything from chatbots to autonomous agents, but their non-deterministic nature exposes you to spoofing, privilege escalation, and compliance pitfalls. In this session, we'll draw on the social engineering experiments I undertook while building conversational AI systems, and we'll see how attackers could bypass security guardrails. We'll explore:

* Real-world injection attacks and the vulnerabilities that make them possible
* Emerging identity patterns, from W3C Verifiable Credentials to on-chain verification (see the sketch after this list)
* Methods to protect against prompt manipulation and the often-overlooked elements in audit logs
* A roadmap to LLM-aware identity ecosystems, including policy-as-code enforcement and federated governance models
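
To make the Verifiable Credentials bullet concrete, here's a minimal, dependency-free sketch of gating an agent action on a credential-shaped object; verifyProof and the policy checks are illustrative assumptions, though the field names follow the W3C VC data model.

```typescript
interface VerifiableCredential {
  issuer: string;
  credentialSubject: { id: string; role?: string };
  expirationDate?: string;
  proof: unknown;
}

async function authoriseAgentAction(
  vc: VerifiableCredential,
  trustedIssuers: Set<string>,
  verifyProof: (vc: VerifiableCredential) => Promise<boolean>,
  requiredRole: string,
): Promise<boolean> {
  if (!trustedIssuers.has(vc.issuer)) return false; // unknown issuer
  if (vc.expirationDate && Date.parse(vc.expirationDate) < Date.now()) {
    return false; // expired credential
  }
  if (vc.credentialSubject.role !== requiredRole) return false; // wrong role
  // The cryptographic check lives outside the prompt, where the model
  // can't be talked out of enforcing it.
  return verifyProof(vc);
}
```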

You'll discover practical approaches to securing LLM workflows today while preparing for tomorrow's decentralised identity architectures. Through demos and case studies, you'll leave with actionable patterns for building trust into AI systems, and insight into where the ecosystem is heading.

AI Killed Your Privacy Tools

Privacy-breaking pattern analysis isn't new, but AI has just made it accessible to everyone. Tools that were once complex, specialized, and limited to well-resourced actors are now available to anyone with access to large language models. Your carefully crafted privacy protections, designed to withstand traditional analysis, are about to face a wave of AI-powered pattern recognition that puts sophisticated privacy-breaking capabilities into everyone's hands.

Through live demonstrations, you'll watch as simple AI tools reconstruct user identities from "anonymous" chat data, rebuild social networks from encrypted messages, and expose organizational structures from metadata we thought was safe. What once required deep expertise in statistical analysis can now be done with a few prompts to an LLM.

We'll explore how traditional privacy approaches fail against these democratized threats, and examine modern defenses like differential privacy and federated learning. You'll leave understanding both the scale of this new challenge and practical steps to protect your systems. Whether you're building communication tools, handling sensitive data, or protecting user privacy, this talk will show you why yesterday's privacy tools won't survive in a world where sophisticated pattern analysis is available to all.

Key Takeaways:
- How traditional privacy and anonymization tools fail against basic AI analysis
- The new ways AI can reconstruct identities and relationships
- Practical architectures and techniques for building AI-resistant privacy systems
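
One of those defences, differential privacy, fits in a few lines; the Laplace mechanism below is a teaching sketch with illustrative parameters, not the talk's demo code.

```typescript
// Sample Laplace noise via inverse-CDF: u uniform in (-0.5, 0.5).
function laplaceNoise(scale: number): number {
  const u = Math.random() - 0.5;
  return -scale * Math.sign(u) * Math.log(1 - 2 * Math.abs(u));
}

// Release an aggregate count with epsilon-differential privacy.
// A counting query has sensitivity 1, so the noise scale is 1/epsilon.
function privateCount(trueCount: number, epsilon: number): number {
  return trueCount + laplaceNoise(1 / epsilon);
}

// Smaller epsilon means more noise: stronger privacy, less accuracy.
console.log(privateCount(1234, 0.5));
```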

Building Rock-Solid Encrypted Applications

Building secure applications requires more than just adding encryption. Through live demos and real-world examples, we'll explore how to properly implement security features like end-to-end encryption, perfect forward secrecy, and secure device migration. You'll see how to protect both data and metadata, at rest and in transit, and learn about the common pitfalls that can compromise seemingly secure systems.

Using a chat application as our example, we'll walk through the evolution from basic encryption to a robust security system. We'll examine how real-world applications handle key management, protect against traffic analysis, and manage secure device enrollment. You'll learn the architectural patterns that make applications truly secure at scale.
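
As a preview of the forward-secrecy pattern, here's a minimal sketch using Node's built-in crypto module: a fresh ephemeral key pair per message, HKDF for key derivation, and AES-GCM for the payload. It's an illustration, not the full protocol from the demos.

```typescript
import {
  createCipheriv,
  diffieHellman,
  generateKeyPairSync,
  hkdfSync,
  randomBytes,
  KeyObject,
} from "node:crypto";

// A fresh ephemeral key pair per message: compromising a long-term key
// later can't decrypt traffic that has already been sent.
function encryptForRecipient(recipientPublicKey: KeyObject, plaintext: Buffer) {
  const ephemeral = generateKeyPairSync("x25519");
  const sharedSecret = diffieHellman({
    privateKey: ephemeral.privateKey,
    publicKey: recipientPublicKey,
  });
  // Derive a one-off symmetric key from the shared secret.
  const key = Buffer.from(
    hkdfSync("sha256", sharedSecret, Buffer.alloc(0), Buffer.from("chat-msg-v1"), 32),
  );
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  return {
    // The recipient needs the ephemeral public key to derive the same secret.
    ephemeralPublicKey: ephemeral.publicKey.export({ type: "spki", format: "der" }),
    iv,
    ciphertext,
    authTag: cipher.getAuthTag(),
  };
}
```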

Whether you're building a messenger, a document store, or any application that needs to protect user data, you'll leave with practical knowledge of how to implement encryption correctly and make informed security decisions in your own projects.

Ten Key Steps for Enhanced Web App Security

This talk gives developers a strategic approach to bolstering web application security, focusing on key areas including securing client-side code, ensuring data integrity, and protecting against common web vulnerabilities.

Through practical advice and live demonstrations, attendees will learn how to implement effective security practices across their applications, from managing external data sources to safeguarding user interactions.
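
As a preview of one of the ten steps, here's a minimal sketch of a strict Content-Security-Policy served from Node's built-in http module; the policy values are illustrative and need tuning per application.

```typescript
import { createServer } from "node:http";

// Illustrative policy: no inline scripts, no third-party code, no framing.
const csp = [
  "default-src 'self'",
  "script-src 'self'",
  "object-src 'none'",
  "base-uri 'none'",
  "frame-ancestors 'none'",
].join("; ");

createServer((req, res) => {
  res.setHeader("Content-Security-Policy", csp);
  res.setHeader("X-Content-Type-Options", "nosniff"); // no MIME sniffing
  res.setHeader("Referrer-Policy", "no-referrer");
  res.end("<h1>Hello, secure world</h1>");
}).listen(8080);
```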

Join us and elevate your frontend security game against the backdrop of today's cyber threats.

Fine Grained Authorisation with Relationship-Based Access Control

Who can tag me in a post? If I move this file to another folder, who now has access? If my owner breaks up with his friend, will I still get a bone?

Whether you're a human or a dog, let's face it: authorisation is hard. Role-based access control is a great starting point but hard to scale. Attribute-based access control scales better, but neither is much good at answering more complex questions, like whether friends-of-friends can read your posts, or whether your dental hygiene is about to suffer. For such situations, we generally have to wrap this up in business logic.

This is where relationship-based access control (ReBAC) comes in, offering a more nuanced approach to authorising access to resources without codifying those rules into the application itself.
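
As a taste of what the demos cover, here's a minimal sketch of Zanzibar-style relationship tuples and a graph-walk check; the data and rules are illustrative (and suitably canine), not a production authorisation service.

```typescript
// Relationship tuples: (object, relation, subject).
type Tuple = { object: string; relation: string; subject: string };

const tuples: Tuple[] = [
  { object: "post:42", relation: "owner", subject: "user:rex" },
  { object: "user:rex", relation: "friend", subject: "user:fido" },
];

function hasRelation(object: string, relation: string, subject: string): boolean {
  return tuples.some(
    (t) => t.object === object && t.relation === relation && t.subject === subject,
  );
}

// "The owner and friends of the owner may read a post", expressed as a
// walk over the relationship graph rather than hard-coded business logic.
function canRead(post: string, subject: string): boolean {
  if (hasRelation(post, "owner", subject)) return true;
  return tuples
    .filter((t) => t.object === post && t.relation === "owner")
    .some((owner) => hasRelation(owner.subject, "friend", subject));
}

console.log(canRead("post:42", "user:fido")); // true: friend of the owner
```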

In this session, we'll look at how to define these relationships, experience live demos, and discover how we can deploy our own fine-grained authorisation service. Expect some tail-wagging insights and a few laughs as we explore access control from a canine's point of view.

KCDC 2023 Sessionize Event

June 2023 Kansas City, Missouri, United States

NDC Oslo 2023 Sessionize Event

May 2023 Oslo, Norway

NDC Sydney 2022 Sessionize Event

October 2022 Sydney, Australia

NDC Melbourne 2022 Sessionize Event

June 2022 Melbourne, Australia

DDD Perth 2021 Sessionize Event

August 2021 Perth, Australia

NDC Sydney 2019 Sessionize Event

October 2019 Sydney, Australia
