Jenna Charlton
Developer Advocate at BrowserStack
Cleveland, Ohio, United States
Jenna is a software tester and developer advocate at BrowserStack who’s been professionally proving things are broken (with love) for over a decade. They’ve graced the stages of dev and test conferences, where they talk about risk-based testing, accessibility (because tech should work for literally everyone), and building agile teams that actually like working together. They’re also known for coaching and cheerleading the next generation of testers—because who else is going to keep the bugs in line? Off the clock, Jenna swaps bug hunts for mosh pits at punk rock shows and high-flying chaos at pro wrestling events with their husband, Bob. They’re also a frequent traveler, though their most important role is “human caretaker” to three feline overlords: Maka, Milton, and Excalipurr.
Topics
The Myth of the Cowboy
Everyone likes to feel like the hero. Nothing feels better than charging in on your horse and rescuing the townsfolk from danger! But have you ever considered that playing the hero can harm both you and your team? Agile requires us to eliminate silos, share information, and share the load. You can't keep the town safe by yourself; the bigger the team, the more people need to be self-sufficient.
We need to reframe our mindset from the lone cowboy saving the day to an agile posse protecting the town from outlaw bugs. And while being the hero can feel good in short doses, saving the day can become exhausting for you and disempowering for your team.
The Mirror in the Machine: Exploratory Testing for the Human Experience
What if the next big breakthrough in your testing strategy didn’t come from a new automation framework or a shiny AI tool, but from you?
In my graduate research, I stumbled upon a social science methodology called autoethnography, a fancy word for a researcher interrogating their own lived experience within a community. And it hit me like a ton of bricks: Autoethnographic research is basically exploratory testing for human experience.
We spend so much time focusing on what the application is doing, but we often ignore the most important quality signal in the room: How the application makes us feel. Are you frustrated? Delighted? Confused? Those aren't just "vibes"—they are critical data points.
In this session, I’m taking you beyond the scripts and into the world of human experience testing. We’ll explore:
- The Science of "Me": What autoethnography actually is and why your personal bias isn't a bug—it's a feature.
- Journaling the Journey: Practical techniques for tracking sentiment, from voice-narrated "field notes" to color-coded bug summary journals.
- Reflective Practice: How to use post-session debriefs and daily reflections to uncover the hidden quality trends your brain usually filters out by the time you've finished your first cup of coffee.
- Quality Advocacy: Shifting the narrative from "I found a bug" to "I identified a trend in human frustration," transforming how you demonstrate value to your team.
You don’t have to be a PhD student to apply these methods. You just have to be a tester who is willing to look in the mirror. Let’s stop making decisions based on invisible vibes and start turning our human intuition into our most powerful quality indicator.
The Gumshoe Protocol
It was a dark and stormy night, the kind of night that breeds P0 defects, when a Teams call interrupted my dinner of Cup of Noodles. It was the VP of Customer Success: "We have a customer-facing problem. We need you and the Gumshoe Team on the case."
Root cause analysis (RCA) is a critical skill for everyone; however, most professionals have never had the opportunity to identify the root cause of a defect before needing to do so in a critical situation. Effective RCA requires all stakeholders to think critically and use their best judgment on the often limited information available. In this workshop, participants will role-play through a real-life scenario and interact with logs, users, and other stakeholders to figure out the root cause before coming together to brainstorm how the incident could have been prevented.
Join the Gumshoe Team of developers, QAs, product professionals, customer support, and project managers to crack the case.
Testing Treasure Maps: The Art of Crafting Charters
Testing with a script is like using GPS, but testing with a charter is like following a treasure map.
At the heart of exploratory testing is the charter. Charters guide our testing and focus our exploration on what's most important and impactful. Unlike scripted test cases, there's an art to crafting charters. A good charter helps us slay the dragon of wasteful testing and escape the labyrinth of distraction so we can reach the treasure chest of good-enough quality.
We'll discuss adventuring our way through heuristics, assembling our party (personas), side quests to encourage exploration, navigating your way through a test session, and using your compass to help you get back on track.
What You'll Learn:
- Using heuristics in exploratory testing
- Using personas in charters
- Language that invites exploration
- Using charters to avoid scope creep
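To make the idea concrete, here is a minimal sketch of what a charter can look like, using the widely shared "Explore... with... to discover..." template popularized by Elisabeth Hendrickson (the target, resources, and goal below are invented examples, not necessarily the format used in this session):

```
Explore     the checkout flow
With        the "budget-conscious parent" persona and a throttled 3G connection
To discover frustrations that could cause cart abandonment
```

Note how the charter names a focus and a goal but leaves the route open, which is exactly what invites exploration while still guarding against scope creep.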
Navigating the Tool Acquisition Adventure
Decision makers face several risks and challenges when acquiring a new tool. It can be difficult to determine if a tool or vendor is reputable. Communication challenges often lead to misunderstandings about the problem that needs to be solved. Identifying the difference between reality and magical thinking in what the tool can provide is critical to long term success.
Thankfully, we can reduce these and other tool-acquisition risks by using structured decision-making and established best practices. In this workshop we'll step through the process of tool acquisition using a Tool Acquisition Matrix (that you get to keep!). We'll cover:
- General Considerations
- Developing Use Cases
- Skill Inventories
- Completing a Proof of Concept
- Developing Success Criteria
And more!
Join me to gain valuable insight and best practices in tool acquisition so your next tool decision can be your best tool decision!
Key Takeaways:
- There's no such thing as a perfect fit: Identifying must-haves and nice-to-haves
- Test the tool like you test requirements: Creating and using a requirement and evaluation scorecard
- The test drive: Getting the most out of proofs of concept and pilots
Black Box Techniques for Unit Tests
One of the greatest strengths of modern development is how easy unit testing has become in many languages and frameworks. Obviously, you're testing your code, but are you testing it thoroughly? Are you testing the right things? Is there more you could be testing?
Enter black box testing techniques: equivalence partitioning, boundary value analysis, decision tables, combinatorial testing, and state transition testing. Although black box testing is performed without looking at the code, these are foundational techniques that can be applied to any type of testing. You'll learn about each technique and how to apply it in different situations to understand features better and engage in deeper testing.
Join me as we walk through these five foundational techniques and how to apply them to our unit tests. You'll come away from the talk with a stronger understanding of testing and how to go beyond "Does it work?" to "Does it work well?"
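As a taste of two of these techniques, here is a minimal Python sketch (the `discount_rate` function and its age brackets are hypothetical examples, not material from the talk). Equivalence partitioning splits the input space into classes that should behave the same, and boundary value analysis tests either side of each partition edge, where off-by-one bugs love to hide:

```python
def discount_rate(age: int) -> float:
    """Hypothetical function under test: children (under 13) and
    seniors (65 and over) get a 50% discount; everyone else gets none."""
    if age < 0:
        raise ValueError("age must be non-negative")
    if age < 13 or age >= 65:
        return 0.5
    return 0.0

# Equivalence partitions: invalid (< 0), child [0, 12],
# adult [13, 64], senior [65, ...). One representative value per partition:
assert discount_rate(8) == 0.5
assert discount_rate(40) == 0.0
assert discount_rate(70) == 0.5

# Boundary value analysis: probe both sides of every partition edge.
assert discount_rate(12) == 0.5   # last child age
assert discount_rate(13) == 0.0   # first adult age
assert discount_rate(64) == 0.0   # last adult age
assert discount_rate(65) == 0.5   # first senior age

# The invalid partition should be rejected, not silently handled.
try:
    discount_rate(-1)
    raise AssertionError("expected ValueError")
except ValueError:
    pass
```

Four boundary checks cover the edges of three valid partitions, which is far stronger coverage per test than picking ages at random.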
Beyond the Hype: Leading Quality Teams Through the AI Transformation
If your LinkedIn feed is anything like mine, you'd think LLMs have already automated away every testing job. But the reality on the ground, managing flaky tests and shipping software that doesn't break on a Friday, is far messier, and far more human.
As a Developer Advocate who lives and breathes the testing trenches, I see two distinct challenges in the AI transition: the technical (How do we use it?) and the existential (How do we lead our people through the fear of replacement?). Ignoring the latter means building your entire AI strategy on quicksand.
This talk is not about prompt engineering; it's about psycho-social engineering. We will dive deep into the uncomfortable truth: AI feels existential to your team. We'll explore actionable strategies for leaders to shift the narrative from replacement to partnership:
- Acknowledge the Fear: How to create psychological safety by explicitly stating, "AI is here to take the tasks you hate, not the thinking you love."
- Establish Agency: Creating "AI Curiosity Sessions" where testers are empowered to break the AI, proving that the "magic box" is dumb without their human expertise and context.
- Redefine Value: Moving beyond the tired "velocity" pitch to selling AI based on Quality of Life, freeing up humans from boilerplate toil so they can focus on the creative, meaningful testing only a human mind can dream up.
- The New Quality KPIs: Dump the vanity metrics like "lines of code generated." I'll introduce the essential Confidence Markers for the AI era: Time-to-Reliability, Self-Healing Rate, and the crucial Toil Reduction Score.
We are not building "AI Teams"; we are building teams that have better tools. This distinction matters. Join me to learn how to be the filter for your team, removing the fear and the hype to ensure the human element remains central to our craft.
A Note From Your User
We often design for the "happy path": a user who is calm, focused, and sitting at a desk. We don't design for the user whose hands are shaking, whose vision is blurring, and whose brain is operating on a fraction of its usual capacity. But when software is a medical necessity, your users encounter your work while they are scared, sick, and vulnerable. In this state, a "glitch" isn't just a ticket in a backlog; it is a moment of profound abandonment.
In this session, I share a raw, first-person experience report on navigating a diabetes diagnosis through the lens of using a Continuous Glucose Monitor (CGM). While CGMs are life-saving innovations, the reality of a "buggy" device takes on a terrifying weight when it results in missed lows and dangerously inaccurate readings. Using the Think, Feel, Say UX framework, we will step through the psychological and physiological toll of relying on a device that you can no longer trust.
We will explore the moments where the "technical requirements" were met, but the human requirements were ignored. If your system fails when a user is at their most compromised, you haven't just missed a requirement; you've failed a person in crisis.
Key takeaways include:
- Defining the High-Stakes Bug: Learning to identify when a bug is a minor inconvenience versus when it becomes life-threatening.
- The "Think, Feel, Say" Crisis Map: A deep dive into the user's internal state during a device failure.
- Designing for the Compromised User: Practical strategies for building empathy into error states, alerts, and data visualization for users in distress.
This isn’t just a talk about medical devices; it’s a call to action for every creator to recognize the human pulse behind every data point and the weight of the responsibility we carry as builders.
Testing 101 for Devs
In Agile, quality has become a team responsibility. Increasingly, developers and non-testers are asked to test and "shift left" but are rarely given the tools to ensure their testing is up to snuff. This often results in wasted time, wasted effort, and costly bugs. In this session we'll cover some of the basics of exploratory testing and testing terminology, and start to think like testers.
Takeaways
- Session-based testing with charters
- Unit testing vs. functional testing
- Testing lifecycle
- Shifting left and pairing
- Understanding and communicating risk
- Why automation isn't always the answer