Speaker

Natalie Somersall

Principal Field Engineer, Public Sector @ Chainguard

Denver, Colorado, United States


Natalie is a principal field engineer at Chainguard serving the public sector market. She spent years designing, building, and leading complex systems in regulated environments at a major systems integrator, but has also taken her career in many other directions - including detours into project management, systems engineering, and teaching.

She’s passionate about diversity in technology and empowering engineers to build better.

Area of Expertise

  • Information & Communications Technology

Topics

  • DevOps
  • DevSecOps
  • Kubernetes
  • Container and Kubernetes security
  • GitHub
  • Industrial and Regulated Environments

Container Escapes 101

Containers aren’t tiny fortresses. They’re leaky rowboats unless you know what you’re doing. This hands-on workshop demystifies container security layer by layer, showing how real-world missteps in runtime, image, and host configurations open doors to escapes, persistence, and lateral movement. We’ll dissect how containers actually work, walk through common isolation failures, and demonstrate how attackers exploit weak assumptions. Whether you’re building, securing, or regulating containerized apps, you’ll leave with a threat model, practical tools, and maybe a new trick or two for _literally_ popping out of the box.

Signing and verifying multi-architecture containers with Sigstore

Multi-architecture containers are magical to use—but a bit arcane to work with. Why does `docker pull python:3` grab only one architecture? How can we verify that the signed one is in use? In this talk, I’ll demystify the order of operations for container resolution. We’ll then dive into OCI manifests, image layers, tags, and how those map to annotations like SBOMs, attestations, and signatures. Using this info, we'll map out a couple of strategies for generating and verifying this information with Cosign, regardless of the architecture in use. I’ll walk through real-world weirdness I’ve helped folks through while managing multi-arch images at scale, including how some registries and pull-through caches behave unexpectedly. This talk is for folks who use containers daily but want to lay the foundation for their software supply chain security.
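
For a taste of the resolution step, here's a dependency-free sketch of how a client picks one per-architecture digest out of a multi-arch index. The JSON is a trimmed, made-up OCI image index standing in for what a registry returns; the digests are placeholders, not real images, and `jq` would be the real tool where grep keeps this self-contained:

```shell
# Sketch: the client-side resolution step for a multi-arch tag.
workdir=$(mktemp -d) && cd "$workdir"
cat > index.json <<'EOF'
{"manifests":[
 {"digest":"sha256:aaaa","platform":{"os":"linux","architecture":"amd64"}},
 {"digest":"sha256:bbbb","platform":{"os":"linux","architecture":"arm64"}}
]}
EOF
# `docker pull` matches your platform against the index, then fetches only
# that one per-architecture manifest.
want_arch=arm64
digest=$(tr ',' '\n' < index.json \
  | grep -B2 "\"architecture\":\"$want_arch\"" \
  | grep -o 'sha256:[a-z0-9]*' | head -n1)
echo "$digest"
# Signatures attach per digest, which is why `cosign verify` on "the image"
# really means verifying the digest your platform actually resolved.
```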

A Gentle Introduction to Container Security

Containers transformed modern application deployment, enabling faster development with portable and scalable systems. They also introduce new security risks that are difficult to navigate, particularly when development teams don't understand fundamental infrastructure security principles. Having a threat model of containerized applications is critical for developers, security engineers, and policymakers alike. This talk will break down the key security risks at each layer of the container ecosystem while providing actionable insights for assessing and mitigating threats.

We'll open with **how containers work** to understand risks across the application's lifecycle. Misconceptions about containers' security properties lead to dangerous assumptions: containers offer process-level isolation, but they are not virtual machines. That isolation is weaker than commonly assumed, so applications inside containers can still be exposed to host-level threats.

Next, we’ll dive into **host (or node) OS risks**, where a shared kernel and a broad attack surface can expose the entire system. We’ll discuss how improper user access rights and file system tampering can lead to privilege escalation. We'll then demonstrate a common container escape to gain persistence and lateral movement on a node's filesystem.
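
The textbook demonstration of this class of escape is a container with an over-broad bind mount of the host's root filesystem. This is not necessarily the specific demo from the talk, just the classic example, and it should only ever be run on a Docker host you own and are allowed to break:

```shell
# Textbook container "escape" via an over-broad bind mount.
if command -v docker >/dev/null 2>&1; then
  # Bind-mounting / and chroot-ing into it gives the containerized process
  # full access to the node's filesystem -- persistence and lateral movement
  # follow from there. --privileged offers similar reach via device access.
  docker run --rm -v /:/host alpine chroot /host sh -c 'id; hostname'
else
  echo "docker not available; command shown for illustration only"
fi
```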

From there, we’ll examine **container runtime risks**, such as vulnerabilities in the runtime software itself and misconfigurations that allow attackers to break out. Application-level security flaws, such as injection attacks and mismanaged secrets, can also be exploited here.

Containers rarely exist on a single system, so next we’ll dive into **orchestrator risks**. Poorly managed administrative access and improper segmentation can lead to unintended data exposure. We'll show a few default Kubernetes configurations that are more risky than they seem and dive into why and how to mitigate those risks.
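
As one concrete example of a risky default: every pod automatically mounts a service-account token unless told otherwise. A hardened pod spec along the lines the talk discusses might look like this sketch (the name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app                       # placeholder name
spec:
  automountServiceAccountToken: false     # the default (true) hands every pod API credentials
  containers:
    - name: app
      image: registry.example.com/app:1.0.0   # placeholder image
      securityContext:
        runAsNonRoot: true
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]
```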

All of these containers come from a **container registry**, which becomes a security liability if improperly managed. We’ll discuss threats such as insecure connections to registries, stale or vulnerable images lingering in repositories, and insufficient controls - and ways to mitigate each of them. We'll leave with patterns for commercial software factories that work in the real world.

Next, we'll dive into **container image security risks**, including vulnerabilities within base images, misconfigurations, and the presence of embedded malware or cleartext secrets. Using third-party images without validation can introduce serious supply chain risks, which underscores the need for strong image provenance and validation practices - illustrated with attacks from the field.

To conclude, we’ll examine **how these risks play out in the real world**, drawing from industry case studies and best practices. Attendees will leave with practical guidance on prioritizing security fixes, assessing risk within their own containerized environments, and speaking about container security within the context of regulatory frameworks like **NIST 800-190** without falling asleep in the process.

By the end of this session, attendees will:
- Understand the full security landscape of containerized applications, including threats across the stack.
- Learn how to assess and triage security risks effectively, prioritizing fixes based on real-world impact.
- Gain the vocabulary to discuss container security within legal and regulatory frameworks, ensuring compliance while maintaining agility.

Whether you’re a developer, security engineer, or policy professional, this session will help you get squared away on container security with confidence.

Whodunnit - git repository mysteries

With all the recent focus on software supply chain security, let's look at the very far left of this process - how does git know who did what, when, where, and why?

It seems straightforward to assume that you have all of this information in a git repository, but that's probably not the case. In this talk, Natalie will walk through how to determine the answers to each of these questions, edge cases and technical gotchas to watch out for, and why each is important to your company's security posture.

**Who?** will walk through identity and commit signing in git. This seemingly simple information turns out to be quite hard to reliably determine. We'll review setting your user identity in git and how/if that links to an external identity provider or your repository hosting service, how that identity interacts with signature verification, the common methods of git commit signing, and what the future of signature verification looks like for git. The walkthrough shows how each of these can leave a gap for auditors and how to address these with reasonable certainty.
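
To see why identity is the hard part, note that git records whatever you tell it - nothing checks the name or email against anything. A minimal sketch, assuming only that `git` is installed:

```shell
# git trusts self-reported identity: any name and email are accepted.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name  "Definitely The CEO"   # nothing validates this
git config user.email "ceo@example.com"
git commit -q --allow-empty -m "totally routine change"
git log -1 --format='%an <%ae>'
# -> Definitely The CEO <ceo@example.com>
# Signing commits (git commit -S) and verifying them (git log --show-signature)
# is what actually ties a commit to a key or identity you trust.
```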

**What?** answers - and then raises more questions about - what files changed at each point in time, how force pushing or history rewrites change this, and how to view these trends in bulk. It has become common to rewrite history to remove large files or secrets, but how effective is this in practice? What gaps does it leave in your reporting process for understanding what files have changed?

**When?** considers time management in git - local versus universal, when it matters, and looking at how often a file changes through the lens of reporting to someone who doesn't know what git is.
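
For a flavor of the time problems: every commit carries two self-reported timestamps - author and committer - each with its own UTC offset. A small sketch, assuming only `git`:

```shell
# A commit has an author date *and* a committer date; both are self-reported
# and both carry a local UTC offset -- three separate traps for reporting.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name "Dev"
git config user.email "dev@example.com"
GIT_AUTHOR_DATE='2020-01-01T00:00:00+00:00' \
GIT_COMMITTER_DATE='2021-06-15T12:00:00-06:00' \
git commit -q --allow-empty -m "backdated for demonstration"
git log -1 --date=iso-strict --format='author:    %ad%ncommitter: %cd'
# %ad and %cd disagree by a year and a half -- and neither had to be true.
```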

**Where?** walks through several "where" questions - the confusing ways that `git checkout` can do different things to different files based on context, leveraging `CODEOWNERS` to give teams ownership over parts of a repository, and where git stores credentials. We'll examine what types of controls these credentials may bypass by looking deeply into the dozen or so types of credentials in GitHub and what they can do. Lastly, we'll consider the places `git` can do things automatically with hooks - hooks that run locally or remotely, how to leverage them responsibly, and what they can realistically do for governance.

**Why?** will consider documenting code changes over time. It's difficult to understand why a change occurred without this context and impossible to go back in time to ask yourself why. This section will cover structuring commit messages, then how to enforce these across projects by force and by cultural convention. Zooming out a little more, we'll talk about merge strategies and how they intersect with trying to prove changes over time. It ends on why and how to add additional insights to this history using an Architecture Decision Record to document major decisions, and why not to use Issues or Pull/Merge Requests for these records.
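
Enforcement "by force" can be as small as a `commit-msg` hook. The ticket pattern below is only an illustration; on a real project you'd enforce the equivalent server-side, since local hooks are opt-in:

```shell
# Minimal commit-msg hook: require a ticket reference so every change records
# *why* it happened. Local hooks are advisory -- server-side checks
# (pre-receive hooks or branch protections) make a convention mandatory.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name "Dev"
git config user.email "dev@example.com"
cat > .git/hooks/commit-msg <<'EOF'
#!/bin/sh
grep -Eq '[A-Z]+-[0-9]+' "$1" || {
  echo "commit message must reference a ticket (e.g. PROJ-42)" >&2
  exit 1
}
EOF
chmod +x .git/hooks/commit-msg
git commit -q --allow-empty -m "PROJ-42: record why this changed"   # accepted
git commit -q --allow-empty -m "fix stuff" 2>/dev/null || echo "rejected"
# -> rejected
```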

**No really, why?** ends on the existential questions of what it means to write software in a regulated environment, or pulling entirely unregulated software in. It's possible to do this quickly, reliably, and securely - we just need to know `git` inside and out.

Threat modeling the GitHub Actions ecosystem

GitHub Actions is one of the most popular CI tools in use today. If you need or want to use it for business, though, there are a lot of choices to make that have huge implications for the information security and compliance posture of your organization. These questions only get harder with more users and more projects, all moving faster and not prioritizing security. Some of these questions include:

- What sort of code and dependencies are in a GitHub Action?
- How can those get exploited?
- What information can I know about what my users are up to?
- My users requested that we allow the use of an Action out of the open-source marketplace, but how do we evaluate the security of it?
- Is it safer to host all of my own build machines?

This talk leverages Natalie's experience in building and running large implementations of GitHub Actions in a regulated environment to provide guidance at this intersection of developer enablement and secure, scalable development. In this talk, we'll dive deep into what an Action really is, what goes into an Action out of the marketplace, and how each of the three types of Action can be exploited with a demonstration. With each exploit, a few control strategies will be discussed to counter it.

Build system integrity is about more than _just_ the code in your build pipeline. Once it's clear what an Action is, Natalie will cover how to handle secrets securely at every stage of your CI/CD pipeline and go over common mistakes she sees users make - authenticating into multiple repositories, handling temporary credentials or long-lived service accounts safely, integrating with other secret stores, and scaling with OpenID Connect. Handling secrets safely is critical to maintaining supply chain security (and keeping a sane bill with your cloud provider of choice).
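
As a sketch of the OpenID Connect pattern: instead of storing long-lived cloud keys as repository secrets, a job can request a short-lived token at run time. This example assumes AWS via the `aws-actions/configure-aws-credentials` action (other clouds have equivalents); the role ARN and region are placeholders:

```yaml
permissions:
  id-token: write     # lets the job request an OIDC token from GitHub
  contents: read
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/example-deploy-role  # placeholder
          aws-region: us-east-1
      # Later steps receive short-lived credentials scoped by the role's trust
      # policy -- nothing long-lived to rotate, leak, or pay for twice.
```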

Next, the talk will spend some time outlining how to think through when (or if) to host your own compute or use GitHub's hosted runners for any particular job. The security of using GitHub's hosted compute is examined, followed by key infrastructure decisions and guidelines for hosting your own runners. With each choice, Natalie will highlight strategies to maximize the traceability and minimize the "blast radius" should a runner or build pipeline become compromised - providing the critical information of Who did What When and Where (and Why!) for incident response.

Most importantly, the talk ends on how to build a human-friendly process to govern the ungovernable. It's (usually) not acceptable to "just allow everything for everyone", but the opposite end of that spectrum is an unwieldy and lengthy approval process. The desire to make sure everyone has their say can instead grow the shadow IT inventory in your build and production infrastructure. Having built this at a large, heavily-regulated company, scaled it to thousands of users, and then advised many other companies in the same situation, Natalie recaps her lessons learned on evaluating a user-requested Action. That means setting clear expectations, writing evaluation guidelines, and offering tactical advice on how to add it to your company's pipeline for better, faster, and more secure software.

Getting Started with DevSecOps in a Regulated Environment

Natalie has worked with a great number of teams within system integrators as they've integrated security into their software engineering processes. Given the opportunity to work with such a diverse group of teams, she's found a couple of common traits in teams that thrive in delivering secure software in a regulated environment.

AppSec Village - DEF CON 33 Sessionize Event Upcoming

August 2025 Las Vegas, Nevada, United States

OpenSSF Community Day North America 2025 Sessionize Event

June 2025 Denver, Colorado, United States

BSides Boulder 2025 Sessionize Event

June 2025 Boulder, Colorado, United States

BSides Boulder 2024 Sessionize Event

June 2024 Boulder, Colorado, United States

BSides Boulder 2023 Sessionize Event

June 2023 Boulder, Colorado, United States

DevSecOps Days Rockies - Virtual Sessionize Event

October 2020
