Travis Smith
Director of Engineering at Shift Interactive
Des Moines, Iowa, United States
Travis Smith is the Director of Engineering at Shift Interactive, where he leads engineering teams building applications across healthcare, agriculture, insurance, and other industries. Over his 17+ year career, he's worked with organizations ranging from Fortune 500 companies to startups.
Travis is a passionate advocate for Test-Driven Development and Extreme Programming practices. He speaks regularly at Des Moines-area meetups and conferences on topics including DORA metrics implementation, branching strategies, rapid software delivery, DevOps, Claude Code, and automated testing for AI features.
Based in the Des Moines metro, Travis is dedicated to helping engineers grow throughout the Midwest.
Area of Expertise
Topics
Accelerating DevOps Practices with Claude Code
DevOps succeeds when teams can focus on collaboration, monitoring, and continuous improvement rather than wrestling with YAML syntax and infrastructure boilerplate. This talk demonstrates how Claude Code can eliminate manual toil in setting up CI/CD pipelines, infrastructure as code, and monitoring configurations, freeing teams to concentrate on the strategic DevOps work that actually moves the needle.
We'll live-code a complete DevOps pipeline using Claude Code: Terraform modules for AWS infrastructure, GitHub Actions for deployment automation, and monitoring setup with proper alerting. More importantly, we'll discuss what Claude Code can't do: the cultural shifts, incident response planning, and architectural decisions that define successful DevOps transformations.
You'll leave with practical examples of where AI coding assistants shine in DevOps workflows, and clarity on where human expertise remains irreplaceable.
Automated Testing for AI Features
AI-powered features are becoming ubiquitous, but how do you test them when responses are non-deterministic? Traditional assertions fall short with LLM integrations. This talk explores practical strategies for automated testing of AI features.
We'll tackle unique AI testing challenges: managing non-deterministic responses, validating safety guardrails against harmful outputs and PII leaks, and building observability practices for debugging AI test failures.
Walk away with concrete patterns for building confidence in your AI features without breaking the bank or sacrificing reliability.
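One pattern in this space is invariant-based testing: instead of asserting an exact response, assert properties every valid response must satisfy, and run the check repeatedly to cover the non-determinism. A minimal sketch, where `generate_summary` is a hypothetical stand-in for an LLM call (not a real API):

```python
import random

def generate_summary(text: str) -> str:
    # Stand-in for a non-deterministic LLM call: returns a shortened
    # version of the input with a random cutoff.
    words = text.split()
    cutoff = random.randint(1, max(1, len(words) // 2))
    return " ".join(words[:cutoff])

def test_summary_invariants():
    source = "DORA metrics measure deployment frequency lead time and failure rate"
    for _ in range(20):  # repeat to exercise the non-determinism
        summary = generate_summary(source)
        assert len(summary) < len(source)   # summaries must be shorter
        assert summary                      # never empty
        for word in summary.split():
            assert word in source           # no tokens absent from the input

test_summary_invariants()
```

The same shape works against a real model: swap the stub for an API call and keep the property assertions, which stay stable even as exact outputs vary.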
Securing Custom MCP Servers
Learn essential techniques for implementing authorization in custom MCP server deployments. We’ll explore configuration approaches for authentication, role-based access patterns, and security boundaries between AI clients and your custom servers. Practical examples demonstrate how to build robust authorization layers that protect sensitive resources while maintaining system performance.
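A role-based access pattern like the one described can be sketched as a permission map checked before any tool runs. The roles, tool names, and `authorize` helper below are illustrative assumptions, not part of any real MCP SDK:

```python
from dataclasses import dataclass

# Hypothetical role-to-tool permission map for a custom MCP-style server.
ROLE_PERMISSIONS = {
    "viewer": {"read_records"},
    "analyst": {"read_records", "run_query"},
    "admin": {"read_records", "run_query", "delete_records"},
}

@dataclass
class Client:
    client_id: str
    role: str

def authorize(client: Client, tool_name: str) -> bool:
    """Return True only if the client's role grants access to the tool."""
    return tool_name in ROLE_PERMISSIONS.get(client.role, set())

# Usage: a viewer may read, but is denied the destructive tool.
viewer = Client("ai-client-1", "viewer")
assert authorize(viewer, "read_records")
assert not authorize(viewer, "delete_records")
```

Defaulting unknown roles to an empty permission set keeps the boundary fail-closed, which matters when AI clients can request arbitrary tools.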
Implementing DORA Metrics: A Practical Example
This session walks through implementing DORA metrics: what the metrics are, which goals to focus on, which questions must be answered before implementation, and a practical strategy for rolling the metrics out across an organization. It is based on the knowledge and experience I have gained implementing them within my own organization.
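Two of the four DORA metrics can be computed directly from deployment records. A minimal sketch, where the record shape is an illustrative assumption:

```python
from datetime import date

# Hypothetical deployment log: one record per production deploy.
deployments = [
    {"day": date(2024, 6, 3), "failed": False},
    {"day": date(2024, 6, 4), "failed": True},
    {"day": date(2024, 6, 5), "failed": False},
    {"day": date(2024, 6, 7), "failed": False},
]

def deployment_frequency(deploys, days_in_window: int) -> float:
    """Average deployments per day over the observation window."""
    return len(deploys) / days_in_window

def change_failure_rate(deploys) -> float:
    """Fraction of deployments that caused a failure in production."""
    failures = sum(1 for d in deploys if d["failed"])
    return failures / len(deploys)

print(deployment_frequency(deployments, days_in_window=5))  # 0.8
print(change_failure_rate(deployments))                     # 0.25
```

Lead time for changes and time to restore service follow the same shape but need commit timestamps and incident records joined to each deploy, which is usually where the "questions to answer" in the session come in.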