Speaker

Robert Herbig

AI Practice Lead at SEP

Indianapolis, Indiana, United States

Robert Herbig is passionate about leading teams to deliver products users love.

As a Lead Software Engineer and AI Practice Lead at SEP—a software product design and development company—Robert brings over 15 years of experience building solutions across embedded, desktop, web, and cloud platforms. He’s worked on everything from small apps to large-scale enterprise systems in industries including aerospace, medical, agriculture, construction, and secure communications.

In his dual role, Robert not only leads teams in building high-quality software but also guides clients and peers in exploring how AI can solve complex problems and create meaningful impact. He focuses on helping teams understand what AI can do, where it fits, and how to integrate it into products that users truly value.

Area of Expertise

  • Information & Communications Technology

Topics

  • AI

Avoiding False Starts With Artificial Intelligence

Artificial Intelligence (AI) is no longer science fiction; it’s here today, and it’s here to stay. It is in the products you use every day: home automation, digital assistants, and credit card fraud detection, to name just a few.

All businesses will be affected by AI in the coming years, and the impact will be significant. The only remaining question is, how will you influence its effect on your company?

Getting started with AI is a daunting task, but a necessary one for businesses that want to stay competitive. During this session, we’ll discuss:

* How to determine if, where, and how to use AI effectively within your organization
* When and how to build an AI team
* Common early mistakes and pitfalls when getting started with AI
* Typical misconceptions around AI and its application
* What to look for in an AI partner or potential hire

Escaping the Trap of Self-Sabotaging Meetings

Meetings are so notorious for draining productivity that historic sabotage manuals listed them as a deliberate tactic. The uncomfortable reality is that modern teams still fall into the same traps unintentionally: too many attendees, fuzzy objectives, and decisions that never quite stick. These failures compound into feedback loops: too many meetings leave no time to prepare, meetings run long, people multitask, and effectiveness drops even further. By the time everyone joins the call, the outcome is often already determined.

This talk focuses on how to break those loops before a meeting begins. Using lessons pulled straight from the sabotage playbook, we will contrast common meeting anti-patterns with practical countermeasures. This includes choosing the right type of meeting, deciding who actually needs to attend and who does not, designing agendas that force clarity, shaping environment and logistics for focus, and closing with explicit decisions and action items. Attendees will leave with reusable heuristics, agenda patterns, and follow-up structures they can apply immediately to design meetings that respect time and produce outcomes. Or, to misquote Arleen Lorrance, be the meeting you want to see happen.

This talk has been given at KCDC 2023 and CodeMash 2026

Can We Learn to Manage Uncertainty? Probably!

When we’re asked when something will be done, it’s tempting to answer the question. “It’ll be done on March 32nd” or “it’ll take 182.5 days” or “we need 15 sprints”. It doesn’t matter if that answer is the best-case, average, or worst-case scenario.

The answer is fundamentally wrong because using a single value hides the fact that what we really meant was a distribution of possible dates, durations, or outcomes. The exact value is uncertain. Development may be faster or slower than we thought. What if the tech lead wins the lottery and retires? What if a global pandemic forces us to change the way we work?

While we can’t control any of those factors, we can be mindful of their existence and communicate more clearly. In this talk we will introduce “bet” language, which makes uncertainty and luck an explicit part of the conversation. This mindset helps us accurately assess risks, pick the right risks to embrace, and avoid analysis paralysis. It also helps us learn in an uncertain world, where even if we make good decisions, we may experience bad outcomes (and vice versa!).
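As a rough illustration of that distribution-first mindset (a sketch for this profile, not material from the talk), the Python snippet below turns made-up three-point task estimates into a spread of completion times via Monte Carlo sampling, so the answer becomes a probability rather than a single date:

```python
import random

# Hypothetical three-point estimates (best, likely, worst) in days for remaining tasks.
tasks = [(2, 4, 9), (1, 3, 7), (5, 8, 20), (3, 5, 12)]

def simulate_once() -> float:
    # Sample each task from a triangular distribution and add the durations up.
    return sum(random.triangular(low, high, mode) for low, mode, high in tasks)

durations = sorted(simulate_once() for _ in range(10_000))

# Report confidence levels instead of one date:
# "there's an 85% chance we finish within N days."
for p in (0.50, 0.85, 0.95):
    print(f"P{int(p * 100)}: {durations[int(p * len(durations)) - 1]:.1f} days")
```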

What Does Security Look Like When Building AI?

Anyone who is working with AI or considering doing so should care about security. When building an AI-powered system or product, the traditional attack surfaces and mitigations still apply. However, AI introduces new attack surfaces depending on the techniques used, and its higher levels of automation mean small failures or compromises can be amplified quickly, increasing both the speed and scale of harm.

In this talk, we’ll discuss how AI systems share many attack vectors with traditional software, and what those attacks look like in practice. We’ll also examine AI-specific attacks such as data poisoning, prompt injection, model extraction, and inference-based data leakage, using real-world incidents across generative AI, machine learning, computer vision, and optimization systems. For each class of attack, we’ll focus on system-level mitigations and the tradeoffs involved, rather than one-size-fits-all solutions.

You don’t need prior experience with AI or security to benefit from this session. You’ll see practical techniques for building more secure AI-enabled software, develop a clearer mental model of AI-specific risks, and leave better equipped to continue learning as the AI security landscape evolves.
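As one hedged example of a system-level mitigation for prompt injection (illustrative only, not code from the talk), the sketch below vets model-proposed tool calls against an allowlist before anything runs; the tool names and the `execute_tool` helper are hypothetical:

```python
# Hypothetical guardrail: only execute model-proposed tool calls that match an
# allowlist and pass basic argument checks, so injected instructions can't widen scope.
ALLOWED_TOOLS = {"search_docs", "summarize"}    # read-only actions only
MAX_ARG_LENGTH = 500

def vet_tool_call(name: str, args: dict) -> bool:
    if name not in ALLOWED_TOOLS:
        return False                             # e.g. a prompted "delete_records" is refused
    if any(len(str(value)) > MAX_ARG_LENGTH for value in args.values()):
        return False                             # reject suspiciously large payloads
    return True

def run_step(proposed: dict) -> str:
    if not vet_tool_call(proposed["name"], proposed.get("args", {})):
        return "Refused: tool call outside the allowed scope."
    return execute_tool(proposed)                # assumed helper, not shown here
```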

This talk was given at CodeMash 2024, Stir Trek 2024, and Momentum 2024. However, the content has been refreshed to reflect recent changes in the landscape, breaches, and mitigations.

What Are AI Agents, Anyway?

AI agents are everywhere right now, but many teams struggle to turn the idea into something that actually ships. The promise is appealing: give a system a goal, expose a set of tools, and let it plan and act on its own. In practice, agents are easy to prototype and hard to make reliable, safe, and cost-effective. Developers are left asking what really makes something an agent, how much autonomy is too much, and how these systems differ from the LLM-based tools they already use.

This talk focuses on the practical mechanics of building AI agents. We’ll look at how agents are structured, how tool use works in real systems, and how protocols like MCP shape agent architectures. We’ll also show how agent-based designs reduce custom orchestration code by shifting control flow from hand-written pipelines to reusable tools and declarative goals. Along the way, we’ll discuss concrete implementation choices, including model selection tradeoffs, tool boundaries, memory strategies, and evaluation approaches like task completion and cost tracking. Rather than treating agents as a binary, we’ll frame them as a set of patterns and tradeoffs developers can apply intentionally, helping you decide when an agent is the right abstraction and when simpler automation will deliver more value.
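To make those mechanics concrete, here is a minimal agent-loop sketch; the `llm` client with its `plan` and `summarize` calls and the toy tool registry are assumptions for illustration, not code from the talk:

```python
# Toy tool registry; real tools would call services, databases, or APIs.
TOOLS = {
    "lookup_order": lambda order_id: f"Order {order_id}: shipped",
    "send_update": lambda to, body: f"Notified {to}",
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):                    # a hard step budget keeps cost bounded
        action = llm.plan(history, list(TOOLS))   # assumed client: returns "done" or a dict
        if action == "done":
            break
        result = TOOLS[action["tool"]](**action["args"])
        history.append(f"{action['tool']} -> {result}")
    return llm.summarize(history)                 # assumed client call
```

The control flow here is the point: the hand-written pipeline shrinks to a loop, and the behavior lives in the declared goal and the tools the agent is allowed to use.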

I have given this talk previously at dev up 2025 and IndyPy 2025.

What We Didn't Build and Why

Good engineering is as much about choosing what not to build as it is about shipping features. We see asks that sound good on the surface yet won’t pay off. Seeing the risk is easy; saying no without doing damage is hard. In consulting this can feel backward: why turn down paid work? Because a bad yes is the fastest way to lose trust.

Even when we don't get to pick the work, we can shape it. Conversations start with translating technical concerns into the language of the business: budget, time, outcome tradeoffs, risk, and compliance. Then we offer options between 'yes' and 'no', such as prototypes with clear go or no-go criteria, keeping human experts in the loop when harm is possible, and buying over building when the space is commoditized. By keeping the conversation going and progress visible, we keep decisions reversible.

Learning outcomes
* Reframe risky asks in business terms, not technical jargon
* Spot product, org, and data red flags early
* Propose buy, baseline, pilot, or human-in-loop alternatives
* Say no rarely, with documented rationale and referrals

Shipping AI Agents Safely

AI agents are delivering real value in production. They take a goal, plan steps, call tools, and adapt as results come back: retry, branch, or pick a different tool. That small degree of autonomy is a force multiplier: great when the next step is right, and costly when it isn’t, with consequences ranging from security flaws to data exposure.

This talk is about securing the entire AI agent, including Model Context Protocol (MCP) tools and servers. We treat the agent as a system: planner, tools, memory, retrieval, and orchestration. We focus on controls that hold up under change: least-privilege action scopes, human approvals for risky steps, simple information-flow control (IFC) labels for sensitive data, retention and redaction by design, and operational guardrails with audits and kill switches. MCP is one solid way to express and enforce these boundaries, not the only one. Then we turn to what you can do next week: tighten action scopes, label sensitive data, add approval paths for risky steps, and log the decisions the agent makes. Mistakes should be contained and diagnosable.

Learning outcomes
* Design least-privilege action scopes and approval points
* Map and control sensitive data flows across tools and memory (IFC)
* Threat-model prompts, retrieval, and tool chains with countermeasures
* Instrument audits, privacy logs, and kill switches that produce evidence
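For a flavor of what least-privilege scopes and approval gates can look like in code, here is a small sketch; the scope names and the `request_human_approval`, `audit_log`, and `run_tool` helpers are hypothetical, not the talk's implementation:

```python
# Illustrative risk levels per tool; a real system would derive these from policy.
SCOPES = {
    "read_calendar": {"risk": "low"},
    "send_message": {"risk": "medium"},
    "delete_meeting": {"risk": "high"},
}

def authorize(agent_scopes: set, tool: str) -> bool:
    return tool in agent_scopes and tool in SCOPES        # deny anything unscoped

def execute(agent_scopes: set, tool: str, args: dict):
    if not authorize(agent_scopes, tool):
        raise PermissionError(f"{tool} is outside this agent's scope")
    if SCOPES[tool]["risk"] == "high" and not request_human_approval(tool, args):
        return {"status": "blocked", "reason": "approval denied"}
    audit_log(tool, args)                                  # every action leaves evidence
    return run_tool(tool, args)                            # assumed helpers, not shown here
```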

We Shipped an AI Agent. Users Didn’t Care.

Shipping still matters. We shipped an AI agent, which is rare right now. Users didn’t care. That isn’t a personal failure; it’s feedback. Value comes from outcomes and what we learn along the way. The real edge is the data we control and the speed at which we turn experiments into decisions.

In this talk, we treat “not used” and “did not help” as valid, valuable results. We use time-to-truth experiments with explicit kill criteria, so a no-value feature is retired quickly, not defended. We'll compare a product implemented as a simple orchestrated AI workflow, as an AI agent, and as a commercial off-the-shelf offering. Building the best products means focusing on what competitors and big tech cannot copy: your data, connections, and unique insight. Even if a feature or app is canceled, the work done can leave durable wins across the platform, from cleaner data contracts to stronger monitoring and evaluation.

Learning outcomes
* Reframe identity: decouple self-worth from features, treat negative results as progress
* Design time-to-truth experiments with clear kill criteria and observability
* Choose buy versus build to avoid racing competitors or tech platform giants
* Leverage proprietary data as a differentiator

LLMs Gonna LLM: Designing Systems That Fail Safely

Modern AI models, especially LLMs, behave in ways that feel useful, magical, and bizarre, sometimes all at once. They produce answers that look expert: clean grammar, domain jargon, neat formatting, even convincing code. That polish feels trustworthy and often is, yet polish is not proof. We still verify, and we design for failure because LLMs are gonna LLM.

Design like the AI model is a powerful, untrusted library: keep the surface small, permissions narrow, and effects contained. Claims carry their own evidence, and important steps get programmatic checks. Treat prompts and tool calls like code, with golden tests and contracts to catch drift. Dark release first, then stage changes based on real telemetry. Keep ownership visible through audit trails and effective logging. We ground each practice in real failure modes, in plain language. The result is simple: failures stay visible and containable, and new power becomes safe to ship, from deleting a meeting to sending a Slack message to reconciling a ledger.

Learning outcomes
* Distinguish polish from proof, and verify before we rely
* Recognize model-typical failures and where they surface in code
* Apply containment, verification, and testing to keep blast radius small
* Keep accountability human, with owners, audit logs, and human-in-the-loop where necessary
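A minimal sketch of the contract-check idea, assuming the model returns JSON with made-up field names; it illustrates "verify before you rely" rather than any specific library or the talk's own code:

```python
import json

REQUIRED_FIELDS = {"action", "target", "confidence"}
ALLOWED_ACTIONS = {"archive", "flag_for_review"}   # deliberately narrow surface

def validate_llm_output(raw: str):
    """Parse and check the model's response before anything acts on it."""
    try:
        data = json.loads(raw)                     # polish is not proof: parse, don't trust
    except json.JSONDecodeError:
        return None
    if not REQUIRED_FIELDS.issubset(data):
        return None
    if data["action"] not in ALLOWED_ACTIONS:
        return None                                # unexpected actions never execute
    confidence = data.get("confidence")
    if not isinstance(confidence, (int, float)) or not 0.0 <= confidence <= 1.0:
        return None
    return data                                    # only now is it handed to the caller
```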

AI vs Regex: When Is Regex Better?

AI is a tempting default when a product needs to scale automation. It is broadly applicable, and it feels safer than writing brittle rules. But general-purpose models can underperform specialized techniques on narrow, high-precision tasks, and their failure modes can be hard to spot. This talk is a case study on scoring assignments against a rubric where our AI approach did not meet expectations.

In this talk, we show how to decide when AI is necessary and when simpler techniques are more reliable, using a real-world case study of a startup that tried generative AI (GenAI), natural language processing (NLP), and machine learning (ML) approaches to solve a core problem. We’ll walk through where we used AI, where it was insufficient and why, and what we did instead, including deterministic techniques. The upside is broad coverage on messy inputs; the cost is drift, latency, and harder-to-explain errors. We apply three controls: define “good” with concrete success criteria, prioritize which failures matter based on user impact, and watch for drift when the model or data changes. For engineers and teams who automate text or code understanding, you’ll leave able to choose the right tool, evaluate tradeoffs, and debug predictable failure modes.
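For a flavor of the deterministic side (not the actual rubric from the case study), a single narrow criterion can often be checked with a plain regex:

```python
import re

# Made-up criterion: "the submission cites at least two sources in (Author, Year) form."
CITATION_PATTERN = re.compile(r"\([A-Z][A-Za-z-]+,\s*\d{4}\)")

def meets_citation_criterion(submission: str, minimum: int = 2) -> bool:
    return len(CITATION_PATTERN.findall(submission)) >= minimum

# Deterministic checks like this are cheap, explainable, and don't drift when a model
# or prompt changes; the AI path is reserved for criteria a regex can't express.
```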

This talk has previously been given at CodeMash 2023, KCDC 2024, and dev up 2025.

CodeMash 2026 Sessionize Event

January 2026 Sandusky, Ohio, United States

dev up 2025 Sessionize Event

August 2025 St. Louis, Missouri, United States

Stir Trek 2025 Sessionize Event

May 2025 Columbus, Ohio, United States

Momentum 2024 Sessionize Event

October 2024 Cincinnati, Ohio, United States

KCDC 2024 Sessionize Event

June 2024 Kansas City, Missouri, United States

Stir Trek 2024 Sessionize Event

May 2024 Columbus, Ohio, United States

CodeMash 2024 Sessionize Event

January 2024 Sandusky, Ohio, United States
