
Robert Herbig
AI Practice Lead at SEP
Indianapolis, Indiana, United States
Robert Herbig is passionate about leading teams to deliver products users love.
As a Lead Software Engineer and AI Practice Lead at SEP—a software product design and development company—Robert brings over 15 years of experience building solutions across embedded, desktop, web, and cloud platforms. He’s worked on everything from small apps to large-scale enterprise systems in industries including aerospace, medical, agriculture, construction, and secure communications.
In his dual role, Robert not only leads teams in building high-quality software but also guides clients and peers in exploring how AI can solve complex problems and create meaningful impact. He focuses on helping teams understand what AI can do, where it fits, and how to integrate it into products that users truly value.
Avoiding False Starts With Artificial Intelligence
Artificial Intelligence (AI) is no longer science fiction; it’s here today, and it’s here to stay. It is in the products you use every day: home automation, digital assistants, or credit card fraud detection, just to name a few.
All businesses will be affected by AI in the coming years, and the impact will be significant. The only remaining question is, how will you influence its effect on your company?
Getting started with AI is a daunting task, but a necessary one for businesses that want to stay competitive. During this session, we’ll discuss:
* How to determine if, where, and how to use AI effectively within your organization
* When and how to build an AI team
* Common early mistakes and pitfalls when getting started with AI
* Typical misconceptions around AI and its application
* What to look for in an AI partner or potential hire
Escaping the Trap of Self-Sabotaging Meetings
Meetings are so notorious for draining productivity that historic sabotage manuals listed them as a tactic of choice. The trouble is, modern meetings often fall into the same traps by accident: too many people invited, vague agendas, and decisions that never stay decided. The result isn’t just a wasted hour, but a feedback loop: too many meetings leave no time to prepare, which makes meetings run longer, which forces multitasking, which makes them even less effective.
In this talk, we show how to break those feedback loops before they start. Using lessons lifted from the old sabotage playbook, we’ll contrast common anti-patterns with practical counter-moves: picking the right type of meeting, deciding who actually needs to be there, shaping the environment and logistics, and closing with action and accountability. The aim is simple: meetings that respect time and deliver outcomes. Or, to misquote Arleen Lorrance, be the meeting you want to see happen.
Learning outcomes
* Spot and break meeting feedback loops
* Apply countermeasures to sabotage anti-patterns
* Choose the right meeting type and environment
* Use deliberate prep to respect time and attention
When Simple Beats AI: Regex, Rules, and Results
AI feels like the obvious answer when we need to scale automation. General-purpose models are broad and flexible, yet they can add cost, latency, and drift where simple, deterministic tools would do. This talk walks through a real product case: scoring assignments against a rubric. We show where GenAI/NLP/ML underdelivered, and how regex, static checks, and tiny heuristics anchored results. You’ll get a practical chooser: a quick pattern test, basic economics for latency and cost, and controls that keep failures visible. We also cover hybrid routes, where rules handle the easy wins and AI takes the ambiguity. No AI fluency required, just a desire to pick the right tool.
Learning outcomes
* Choose AI vs simple tools with a clear, five-question rubric
* Design hybrid routes: rules first, AI on ambiguity
* Instrument latency, cost, and drift as acceptance criteria
* Reduce incidents by keeping failures observable and testable
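For a flavor of the hybrid route, here is a minimal Python sketch. The rubric items, thresholds, and the call_llm placeholder are illustrative assumptions, not the product from the talk: deterministic checks handle the easy wins, and only ambiguous criteria fall through to a model.

```python
import re

# Illustrative sketch: the rubric items, thresholds, and call_llm placeholder
# are assumptions for this example, not the product described in the talk.
WORD_COUNT_MIN = 250
CITATION_PATTERN = re.compile(r"\(\w+,\s*\d{4}\)")  # matches e.g. "(Smith, 2020)"

def call_llm(prompt: str) -> int:
    """Placeholder for whatever model client the team actually uses."""
    raise NotImplementedError

def score_submission(text: str) -> dict:
    """Route rubric items: deterministic checks first, AI only for ambiguity."""
    results = {
        # Easy wins: cheap, deterministic, and trivially testable.
        "meets_length": len(text.split()) >= WORD_COUNT_MIN,
        "has_citation": bool(CITATION_PATTERN.search(text)),
    }
    if results["meets_length"] and results["has_citation"]:
        # Ambiguous criteria (e.g. "argument is well supported") go to the model.
        results["argument_quality"] = call_llm(
            "Rate how well the argument is supported, 1-5:\n" + text
        )
    else:
        # Don't spend model cost or latency on submissions that fail hard rules.
        results["argument_quality"] = None
    return results
```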
Can We Learn to Manage Uncertainty? Probably!
When we’re asked when something will be done, it’s tempting to answer the question. “It’ll be done on March 32nd” or “it’ll take 182.5 days” or “we need 15 sprints”. It doesn’t matter if that answer is the best-case, average, or worst-case scenario.
The answer is fundamentally wrong because using a single value hides the fact that what we really meant was a distribution of possible dates, durations, or outcomes. The exact value is uncertain. Development may be faster or slower than we thought. What if the tech lead wins the lottery and retires? What if a global pandemic forces us to change the way we work?
While we can’t control any of those factors, we can be mindful of their existence and communicate more clearly. In this talk we will introduce “bet” language, which makes uncertainty and luck an explicit part of the conversation. This mindset helps us accurately assess risks, pick the right risks to embrace, and avoid analysis paralysis. It also helps us learn in an uncertain world, where even if we make good decisions, we may experience bad outcomes (and vice versa!).
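As a taste of the idea, here is a minimal Monte Carlo sketch (the tasks and estimates are made up): instead of promising a single date, we simulate many possible futures and phrase the result as odds.

```python
import random

# Made-up tasks and ranges, purely for illustration.
TASKS = [
    ("API integration", 3, 8),   # (name, optimistic days, pessimistic days)
    ("UI polish",       2, 5),
    ("Load testing",    1, 6),
]

def simulate_once() -> float:
    """One possible future: each task lands somewhere in its range."""
    return sum(random.triangular(low, high) for _, low, high in TASKS)

durations = sorted(simulate_once() for _ in range(10_000))
p50 = durations[len(durations) // 2]          # even odds
p90 = durations[int(len(durations) * 0.9)]    # 9-in-10 odds

# "Bet" language: odds instead of a single promised date.
print(f"Even odds we finish within {p50:.0f} days; 9-in-10 odds within {p90:.0f} days.")
```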
Building Secure AI
Anyone working with AI, or considering it, should care about security. When building an AI-powered system or product, the traditional attack surfaces and mitigations still apply. However, new attack surfaces can appear depending on the specific AI approaches used. And because AI systems typically have a higher level of automation, they can do more harm if they are compromised.
In this talk, we’ll discuss how AI has the same attack vectors as traditional software, and what those attacks look like. We’ll also discuss new attacks that are specific to generative AI (e.g. LLMs like ChatGPT), machine learning & computer vision systems, and optimization techniques. For each type of attack, we’ll point out how they can be thwarted, or at least mitigated.
Previous experience with AI or security is not required to benefit from the session. Attendees will see techniques that help them write more secure AI-enabled software and will walk away with a better understanding of AI-specific attack vectors and their mitigations. The goal is not to teach the intricacies of each technique, but to give attendees the lay of the land, the key terms to search for when they leave, and the means to find security education resources in the future.
What Are AI Agents, Anyway?
AI agents are everywhere in the conversation, but their real value is harder to pin down. The promise is simple: systems that can take a goal, plan steps, and adapt with minimal oversight. The challenge is knowing when that promise pays off and when it doesn’t.
In this talk we’ll look at how to extract value from agentic systems. We’ll cover what makes an AI agent useful, why you might choose one over simpler automation, and the challenges that come with autonomy. We’ll show how tool use and protocols like MCP fit into the picture, and close with testing and evaluation: how to measure whether an agent is doing real work or just burning dollars.
Learning outcomes:
* Identify where AI agents provide leverage over traditional automation
* Recognize tradeoffs in autonomy and tool use
* Evaluate whether an agent is delivering value
* Anticipate common failure modes in agent implementations
I have given this talk previously at dev up 2025
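To make the terms concrete, here is a minimal sketch of the plan-act-observe loop behind an agent. It is an assumed structure, not a specific framework: the model picks a tool, we run it, and the observation feeds the next decision; protocols like MCP standardize how tools are described and called, not the loop itself.

```python
def run_agent(goal: str, tools: dict, llm, max_steps: int = 5) -> str:
    """Minimal plan-act-observe loop; 'llm' is any callable that returns text."""
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        # Ask the model for the next step, given the goal and observations so far.
        decision = llm("\n".join(history) +
                       "\nReply as 'tool_name: input' or 'DONE: answer'")
        action, _, argument = decision.partition(":")
        if action.strip() == "DONE":
            return argument.strip()
        # Tool use: protocols like MCP standardize how tools are described and
        # called; the loop itself stays this simple.
        tool = tools.get(action.strip())
        observation = tool(argument.strip()) if tool else f"Unknown tool: {action.strip()}"
        history.append(f"{decision}\nObservation: {observation}")
    return "Stopped: step budget exhausted (one place to measure cost against value)."
```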
What We Didn't Build and Why
Good engineering is as much about choosing what not to build as shipping features. We see asks that sound good on the surface yet will not pay back. Seeing the risk is easy; saying no without damage is hard. In consulting this can feel backward: why turn down paid work? Because a bad yes is the fastest way to lose trust.
Even when we don't get to pick the work, we can shape it. Conversations start with translating technical concerns into the language of the business: budget, time, outcome tradeoffs, risk, and compliance. Then we offer options between 'yes' and 'no', such as prototypes with clear go/no-go criteria, keeping human experts in the loop when harm is possible, and buying over building when the space is commoditized. By keeping the conversation going and progress visible, we keep decisions reversible.
Learning outcomes
* Reframe risky asks in business terms, not technical jargon
* Spot product, org, and data red flags early
* Propose buy, baseline, pilot, or human-in-loop alternatives
* Say no rarely, with documented rationale and referrals
Shipping AI Agents Safely
AI agents are delivering real value in production. They take a goal, plan steps, call tools, and adapt as results come back: retry, branch, or pick a different tool. That small degree of autonomy is a force multiplier: great when the next step is right, and costly when it isn’t, with consequences ranging from security flaws to data exposure.
This talk is about securing the entire AI agent, including Model Context Protocol (MCP) tools and servers. We treat the agent as a system: planner, tools, memory, retrieval, and orchestration. We focus on controls that hold up under change: least-privilege action scopes, human approvals for risky steps, simple information-flow labels for sensitive data (IFC), retention and redaction by design, and operational guardrails with audits and kill switches. MCP is one solid way to express and enforce these boundaries, not the only one. The emphasis is on what you can do next week: tighten action scopes, label sensitive data, add approval paths for risky steps, and log the decisions the agent makes. Mistakes should be contained and diagnosable.
Learning outcomes
* Design least-privilege action scopes and approval points
* Map and control sensitive data flows across tools and memory (IFC)
* Threat-model prompts, retrieval, and tool chains with countermeasures
* Instrument audits, privacy logs, and kill switches that produce evidence
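As one illustration of the first two outcomes, here is a minimal Python sketch with assumed names: every tool call passes through an explicit action scope, risky actions wait on a human approver, and each decision lands in an audit log.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class ToolPolicy:
    allowed_actions: frozenset   # least-privilege action scope
    needs_approval: frozenset    # risky steps gated on a human

def guarded_call(tool: Callable[[str, str], str], policy: ToolPolicy,
                 action: str, payload: str,
                 approve: Callable[[str], bool], audit_log: list) -> str:
    """Every tool call is scoped, optionally approved, and always audited."""
    if action not in policy.allowed_actions:
        audit_log.append(f"DENIED {action}")       # contained and diagnosable
        raise PermissionError(f"'{action}' is outside this agent's scope")
    if action in policy.needs_approval and not approve(f"{action}: {payload}"):
        audit_log.append(f"REJECTED {action}")
        return "Action rejected by approver"
    audit_log.append(f"ALLOWED {action}")
    return tool(action, payload)

# Example policy: reading a calendar is in scope; deleting a meeting needs a human yes.
policy = ToolPolicy(frozenset({"read_calendar", "delete_meeting"}),
                    frozenset({"delete_meeting"}))
```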
We Shipped an AI Agent. Users Didn’t Care.
Shipping still matters. We shipped an AI agent, which is rare right now. Users didn’t care. That isn’t a personal failure; it’s feedback. Value comes from outcomes and what we learn on the way. The real edge is the data we control and the speed at which we turn experiments into decisions.
In this talk, we treat “not used” and “did not help” as valid, valuable results. We use time-to-truth experiments with explicit kill criteria, so a no-value feature is retired quickly, not defended. We'll compare a product implemented as a simple orchestrated AI workflow, as an AI agent, and as a commercial off-the-shelf offering. Building the best products means focusing on what competitors and big tech cannot copy: your data, connections, and unique insight. Even if a feature or app is canceled, the work done can leave durable wins across the platform, from cleaner data contracts to stronger monitoring and evaluation.
Learning outcomes
* Reframe identity: decouple self-worth from features, treat negative results as progress
* Design time-to-truth experiments with clear kill criteria and observability
* Choose buy versus build to avoid racing competitors or tech platform giants
* Leverage proprietary data as a differentiator
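To make the kill-criteria idea concrete, here is a minimal sketch (the thresholds, dates, and field names are assumptions, not the product from the talk): the bet is agreed up front, and once the evidence is in, the data decides whether the feature is retired.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative thresholds and names only; not the product from the talk.
@dataclass
class KillCriteria:
    decide_by: date          # time-to-truth: a date we commit to deciding on
    min_weekly_users: int    # below this, "not used" is the result
    min_task_success: float  # below this, "did not help" is the result

def should_retire(c: KillCriteria, weekly_users: int,
                  task_success: float, today: date) -> bool:
    """Once the evidence is in, a no-value feature is retired, not defended."""
    evidence_in = today >= c.decide_by
    no_value = weekly_users < c.min_weekly_users or task_success < c.min_task_success
    return evidence_in and no_value

criteria = KillCriteria(date(2025, 3, 1), min_weekly_users=50, min_task_success=0.6)
print(should_retire(criteria, weekly_users=12, task_success=0.4, today=date(2025, 3, 2)))
```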
LLMs Gonna LLM: Designing Systems That Fail Safely
Modern AI models, especially LLMs, behave in ways that feel useful, magical, and bizarre, sometimes all at once. They produce answers that look expert: clean grammar, domain jargon, neat formatting, even convincing code. That polish feels trustworthy and often is, yet polish is not proof. We still verify, and we design for failure because LLMs are gonna LLM.
Design as if the AI model is a powerful, untrusted library: keep the surface small, permissions narrow, and effects contained. Claims carry their own evidence, and important steps get programmatic checks. Treat prompts and tool calls like code, with golden tests and contracts to catch drift. Dark release first, then stage changes based on real telemetry. Keep ownership visible through audit trails and effective logging. We ground each practice in real failure modes, in plain language. The result is simple: failures stay visible and containable, and new power becomes safe to ship, from deleting a meeting to sending a Slack message to reconciling a ledger.
Learning outcomes
* Distinguish polish from proof, and verify before we rely
* Recognize model-typical failures and where they surface in code
* Apply containment, verification, and testing to keep blast radius small
* Keep accountability human, with owners, audit logs, and human-in-the-loop where necessary
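As a small illustration of “polish is not proof,” here is a minimal Python sketch (the action names and JSON contract are assumptions): model output is validated programmatically, out-of-scope actions are rejected, and golden tests pin the behavior so drift shows up in CI.

```python
import json

# Assumed contract for this example: the model must reply with a JSON action.
ALLOWED_ACTIONS = {"delete_meeting", "send_slack_message"}

def parse_model_action(raw: str) -> dict:
    """Claims carry their own evidence: anything that fails the contract is rejected."""
    data = json.loads(raw)                       # malformed output fails loudly
    if data.get("action") not in ALLOWED_ACTIONS:
        raise ValueError(f"Unexpected action: {data.get('action')!r}")
    if not isinstance(data.get("target"), str) or not data["target"]:
        raise ValueError("Missing or invalid target")
    return data

def test_golden_delete_meeting():
    """Golden test: a known-good response must keep parsing the same way."""
    golden = '{"action": "delete_meeting", "target": "weekly-sync"}'
    assert parse_model_action(golden) == {"action": "delete_meeting",
                                          "target": "weekly-sync"}

def test_rejects_out_of_scope_action():
    try:
        parse_model_action('{"action": "wire_money", "target": "acct-123"}')
        assert False, "Out-of-scope action should have been rejected"
    except ValueError:
        pass
```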
Stir Trek 2025 Sessionize Event
Momentum 2024 Sessionize Event
KCDC 2024 Sessionize Event
Stir Trek 2024 Sessionize Event
CodeMash 2024 Sessionize Event
Momentum 2023 Sessionize Event
