Speaker

Brian Loomis

Value + discipline + experience = architecture

East Lansing, Michigan, United States

Director of Architecture at Progress; formerly CTO and VP of Development at several startups; have run my own software architecture consultancy; and lived through the .NET and Azure launches at Microsoft. Mostly interested in pattern-based software architecture for software as a service that delivers true customer value (bonus points if it can be done on a Peloton or with AI). Supportive dad with two young adults in college going well beyond where I could imagine, with a supportive spouse on a global fabric journey.

Area of Expertise

  • Business & Management
  • Finance & Banking
  • Government, Social Sector & Education
  • Information & Communications Technology
  • Manufacturing & Industrial Materials

Topics

  • Software Architecture
  • Secure Software Architectures
  • Agentic AI
  • Applied Generative AI
  • DevSecOps
  • Secure Code
  • Scalable Software Architectures
  • Cloud-Native Software Architecture

10 practical techniques for using AI to refactor and extend legacy applications

Modernizing legacy applications is often a manual, high-risk endeavor. However, the emergence of AI-native development tools like GitHub Copilot and agentic IDE assistants has introduced a new playbook for architectural refactoring. This 45-minute session moves beyond “code completion” to explore ten high-impact, practical techniques for extending and securing aging codebases, giving software architects a roadmap for finally asking the questions that, only a few years ago, would have taken unbounded time to investigate.

We will demonstrate how to treat your infrastructure and documentation as code-ready context, using AI to perform complex “archaeology” and reconstruction. You will learn how to automate the generation of system diagrams from messy Helm charts and how to synthesize OpenAPI specifications from undocumented endpoints to jumpstart new service integrations.
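
The deterministic half of that API "archaeology" can be sketched before any AI is involved: scan legacy source for undocumented route handlers and emit an OpenAPI skeleton an assistant can then flesh out with descriptions and schemas. A minimal sketch, assuming Flask-style decorators; the sample source and regex are illustrative only:

```python
import json
import re

# Illustrative legacy source: two undocumented Flask-style endpoints.
LEGACY_SOURCE = '''
@app.route("/orders", methods=["GET"])
def list_orders(): ...

@app.route("/orders/<int:order_id>", methods=["GET", "DELETE"])
def order_detail(order_id): ...
'''

ROUTE_RE = re.compile(
    r'@app\.route\("(?P<path>[^"]+)",\s*methods=\[(?P<methods>[^\]]+)\]\)'
)

def synthesize_openapi(source: str) -> dict:
    """Build a minimal OpenAPI 3.0 skeleton from route decorators in source."""
    spec = {"openapi": "3.0.3",
            "info": {"title": "Legacy API", "version": "0.1.0"},
            "paths": {}}
    for m in ROUTE_RE.finditer(source):
        # Convert Flask converters like <int:order_id> to OpenAPI {order_id}.
        path = re.sub(r"<(?:\w+:)?(\w+)>", r"{\1}", m.group("path"))
        methods = [s.strip().strip('"').lower()
                   for s in m.group("methods").split(",")]
        spec["paths"].setdefault(path, {})
        for verb in methods:
            spec["paths"][path][verb] = {
                "summary": "TODO: describe",       # the AI fills these in
                "responses": {"200": {"description": "OK"}},
            }
    return spec

spec = synthesize_openapi(LEGACY_SOURCE)
print(json.dumps(sorted(spec["paths"])))
```

Feeding this skeleton (rather than raw source) to an assistant keeps the generated spec anchored to endpoints that actually exist.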

We will also tackle the “unseen” side of legacy debt: using AI to identify version mismatches in dependencies, verifying code against OWASP security standards, and building automated scalability tests. Finally, we’ll dive into the frontier of Agentic DevOps, showing how to test coordinating groups of agents and validate the accuracy of RAG (Retrieval-Augmented Generation) outputs to ensure your AI-assisted architecture remains grounded in truth.
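
The version-mismatch audit is another place where a deterministic pass should precede the AI one: diff the versions each service pins, then hand only the conflicts to the assistant for upgrade analysis. A minimal sketch with illustrative service and package names:

```python
# Flag packages pinned at more than one version across service manifests.
def find_mismatches(manifests: dict[str, dict[str, str]]) -> dict[str, set[str]]:
    """Map package name -> set of conflicting pinned versions."""
    seen: dict[str, set[str]] = {}
    for service, deps in manifests.items():
        for pkg, version in deps.items():
            seen.setdefault(pkg, set()).add(version)
    return {pkg: versions for pkg, versions in seen.items() if len(versions) > 1}

manifests = {
    "billing": {"requests": "2.31.0", "cryptography": "41.0.7"},
    "catalog": {"requests": "2.25.1", "cryptography": "41.0.7"},
}
print(find_mismatches(manifests))  # only "requests" conflicts
```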

Audience - software architects with legacy modernization concerns
Requirements - none beyond an understanding of your own architecture and design process
Session duration about 1 hour with Q&A

Agentic patterns for domain-driven design

As AIOps and collaboration tools pivot toward “agentic” functionality, many teams struggle to move beyond unreliable chat prompts. The challenge isn’t just “calling an LLM”—it’s architecting a system where agents reliably interact with complex domains, respect business logic, and scale alongside existing microservices.

In this session, we explore Agentic Patterns for Domain-Driven Design, drawing on successful implementations from best-of-breed SaaS platforms. We will dissect the three critical layers of a modern agentic stack—Interaction, Orchestration, and Domain—while focusing on the rigorous engineering required to take a project from concept to code.

Key topics include:

The PRD as Prompt Engineering: How to construct a “spec” that drives planning in IDE assistants. We’ll discuss writing requirements that are machine-readable and context-aware to ensure AI-generated code aligns with your architectural standards.
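
One way to picture "the PRD as prompt": keep the spec machine-readable and render it deterministically into the context the assistant plans from. A minimal sketch; the field names are illustrative, not a standard:

```python
# A machine-readable PRD fragment. Real specs would be richer (acceptance
# tests, data contracts), but the rendering idea is the same.
SPEC = {
    "feature": "export invoices as CSV",
    "constraints": ["no new external dependencies", "p95 latency under 200ms"],
    "architecture": "hexagonal; adapters in /adapters, domain logic in /core",
}

def spec_to_prompt(spec: dict) -> str:
    """Render the spec into a deterministic prompt an IDE assistant plans from."""
    lines = [f"Implement: {spec['feature']}", "Honor these constraints:"]
    lines += [f"- {c}" for c in spec["constraints"]]
    lines.append(f"Architecture rules: {spec['architecture']}")
    return "\n".join(lines)

print(spec_to_prompt(SPEC))
```

Because the prompt is generated, not hand-typed, architectural standards travel with every request instead of depending on developer discipline.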

Testing for Quality Attributes: How to move beyond “vibes-based” testing. We will cover technical validation for scalability, identifying tech debt early in AI-generated modules, and the “Token Tax”—how to model cost (COGS) when tokens drive your unit economics.
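
The "Token Tax" reduces to simple arithmetic once you separate prompt and completion pricing. A hedged sketch; the per-1k-token prices below are placeholders, not real vendor rates:

```python
# Model per-request COGS when tokens drive unit economics.
def request_cogs(prompt_tokens: int, completion_tokens: int,
                 price_in_per_1k: float, price_out_per_1k: float) -> float:
    """Cost of a single LLM call in dollars."""
    return ((prompt_tokens / 1000) * price_in_per_1k
            + (completion_tokens / 1000) * price_out_per_1k)

# E.g., an agent run of 8 calls averaging 3k prompt / 500 completion tokens,
# at placeholder rates of $0.01 in / $0.03 out per 1k tokens:
cost = sum(request_cogs(3000, 500, 0.01, 0.03) for _ in range(8))
print(round(cost, 2))  # 0.36
```

Multiply by requests per customer per month and the "token tax" becomes a line item you can compare against subscription price.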

The Orchestration Plane: A comparison of the Manager-Delegate pattern versus Serial Agent Tasks, and how to manage execution state at scale.
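
The contrast between the two shapes can be sketched with stub "agents" (plain functions here; real agents would wrap LLM calls, and all names are illustrative):

```python
# Stub agents standing in for LLM-backed workers.
def research(task: str) -> str: return f"notes({task})"
def draft(notes: str) -> str: return f"draft({notes})"
def review(text: str) -> str: return f"reviewed({text})"

def serial_pipeline(task: str) -> str:
    # Serial Agent Tasks: each agent's output feeds the next; execution
    # state is implicit in the call chain and hard to inspect or retry.
    return review(draft(research(task)))

def manager_delegate(task: str) -> dict:
    # Manager-Delegate: a manager fans work out to delegates and keeps
    # explicit state it can inspect, persist, retry, or reorder at scale.
    state = {"task": task}
    state["notes"] = research(task)
    state["draft"] = draft(state["notes"])
    state["final"] = review(state["draft"])
    return state

print(serial_pipeline("migrate-auth"))
print(manager_delegate("migrate-auth")["final"])
```

The outputs are identical; what differs is who owns the execution state, which is exactly what matters once runs fail partway through.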

The Domain & Information Layer: Integrating backend microservices via the Model Context Protocol (MCP) and custom RAG to maintain a “clean” architecture.

From MVP to Mainline: Strategies for transferring your agentic project from a rapid-prototype MVP into a normal development cycle, ensuring it inherits the same CI/CD, observability, and security posture as your core platform.

Audience - software designers adding AI to existing products
Required background - basic familiarity with MCP, RAG, agent orchestration, microservices architecture
Session duration about 1 hour with Q&A

Making AIOps safe for production

The era of writing every infrastructure configuration script by hand is ending. New AIOps tools and AI-assisted platforms can now instantly generate complex Chef recipes, Terraform modules, and shell scripts, directly ingesting them into our DevOps systems for execution. The speed is phenomenal, but the risk is unprecedented. What happens when an AI hallucination—or a subtly malicious prompt injection—instructs your pipeline to drop a database, revoke critical IAM permissions, or misconfigure a load balancer?

This talk is a survival guide for the modern DevOps and SRE practitioner. I will share hard-won experiences from the trenches of integrating agentic AIOps tools with enterprise infrastructure as code (IaC) like Chef. We will break down the exact architectural safeguards required to keep your production environment secure when AI is writing the code.

First, we will dive into automated validation pipelines. You will learn how to implement strict static analysis, linting, and security checks on AI-generated bash, PowerShell, and Ruby before it ever executes. Second, we will discuss the absolute necessity of high-fidelity test environments. If AI is generating the scripts, your staging environment must be a perfect, matched clone of production where these scripts can be safely “detonated” and validated.
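
One gate in such a validation pipeline can be as simple as a denylist scan that blocks obviously destructive commands before execution. A minimal sketch; real pipelines would layer shellcheck, policy engines, and dry runs on top, and the patterns below are illustrative, not complete:

```python
import re

# Patterns for destructive operations an AI-generated script must never run.
DENYLIST = [
    r"\brm\s+-rf\s+/",               # recursive delete from root
    r"\bDROP\s+(TABLE|DATABASE)\b",  # destructive SQL
    r"\bmkfs\.",                     # reformatting a filesystem
    r"\biam\b.*\b(delete|detach)-",  # destructive AWS IAM calls
]

def script_violations(script: str) -> list[str]:
    """Return the denylist patterns the script matches (empty list == pass)."""
    return [p for p in DENYLIST if re.search(p, script, re.IGNORECASE)]

generated = "#!/bin/bash\naws iam detach-role-policy --role-name prod-admin\n"
print(script_violations(generated))  # one IAM violation; block the pipeline
```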

Finally, we will examine the paradigm shift in orchestration: the Human-in-the-Loop (HITL) Gatekeeper. We will explore how to design agentic workflows that pause execution for high-risk changes, elevating the DevOps engineer into a safety “approver” role. We aren’t being replaced by AI; our jobs are evolving. Join me to learn how to step up from writing the scripts to orchestrating the agents.
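
The gatekeeper shape can be sketched in a few lines: classify each proposed change and pause for a human above a risk threshold. The risk rules and approver hook below are illustrative assumptions; in production the pause would be an async approval ticket, not a callback:

```python
# Actions the workflow must never auto-apply.
HIGH_RISK_ACTIONS = {"delete", "revoke", "scale_down"}

def execute_change(action: str, target: str, approver=None) -> str:
    if action in HIGH_RISK_ACTIONS:
        # Pause: route to a human approver instead of auto-applying.
        if approver is None or not approver(action, target):
            return f"BLOCKED: {action} {target} awaiting approval"
    return f"APPLIED: {action} {target}"

print(execute_change("scale_up", "web-pool"))    # low risk: auto-applies
print(execute_change("delete", "orders-db"))     # high risk: blocked
print(execute_change("delete", "orders-db", approver=lambda a, t: True))
```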

Audience - DevOps engineers evaluating AIOps tooling
Requirements - familiarity with a DevOps tool (infrastructure as code), basic understanding of agents, RAG, MCP
Session duration about 1 hour with Q&A

Privacy in AIOps: Engineering Trust and Provenance in Open Source

The integration of Artificial Intelligence into IT Operations (AIOps) is transforming how we manage infrastructure, but the invisible supply chain of AI—the data itself—requires rigorous governance. When open-source projects gain advanced capabilities like Retrieval-Augmented Generation (RAG) and autonomous agents, maintainers are forced to confront complex privacy realities.

In this session, I will share architectural and legal insights from the trenches of modernizing open-source projects with AIOps. We will break down the challenge of managing a mixed-data environment where sensitive customer data, vendor-supplied data, and publicly scraped information must coexist without cross-contamination. Because securing an on-prem LLM deployment is much more than just repackaging cloud software, we will dive deep into designing AI features for strict, air-gapped environments where “phone home” telemetry and hidden network dependencies are strictly prohibited.
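
One concrete check for the air-gapped case is a lint that scans deployment config for URLs pointing outside an allowlist of internal hosts, catching hidden "phone home" dependencies before shipping. A minimal sketch with illustrative hostnames:

```python
import re

# Hosts permitted inside the air-gapped environment (illustrative).
INTERNAL = {"registry.corp.local", "models.corp.local"}

def external_endpoints(config_text: str) -> list[str]:
    """Return any URL hosts in the config that are not on the internal allowlist."""
    hosts = re.findall(r"https?://([\w.-]+)", config_text)
    return [h for h in hosts if h not in INTERNAL]

config = """
model_registry: https://models.corp.local/v1
telemetry_sink: https://telemetry.vendor.example.com/ingest
"""
print(external_endpoints(config))  # ['telemetry.vendor.example.com']
```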

Furthermore, we will demystify the mechanics of data provenance in RAG architectures. Just as modern AI models can provide source citations, we must engineer systems that track which specific documents and chunks influenced an output. You will learn how to map data lineage to ensure auditability, allowing users to trace the exact origins and transformations of the data driving their AIOps tools.
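
Chunk-level provenance can be sketched by carrying a source ID with every retrieved chunk so the answer returns citations alongside text. The scoring below is naive keyword overlap purely to keep the example self-contained; a real pipeline would use embeddings, and all document IDs are illustrative:

```python
# Each chunk carries its provenance: a stable source#chunk identifier.
CHUNKS = [
    {"id": "runbook.md#3", "text": "restart the ingest service after config changes"},
    {"id": "eula.pdf#12", "text": "customer data may not leave the tenant boundary"},
    {"id": "wiki/net#1",  "text": "the load balancer health check runs every 10s"},
]

def retrieve_with_provenance(query: str, k: int = 2) -> list[dict]:
    """Rank chunks by keyword overlap, keeping source IDs attached."""
    words = set(query.lower().split())
    scored = sorted(CHUNKS, key=lambda c: -len(words & set(c["text"].split())))
    return scored[:k]

hits = retrieve_with_provenance("when does the health check of the load balancer run")
answer = {"citations": [h["id"] for h in hits]}
print(answer["citations"][0])  # wiki/net#1
```

Because the ID survives from ingestion through retrieval to the final answer, an auditor can trace exactly which document and chunk influenced each output.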

Finally, we will bridge the gap between code and compliance. We will discuss how to craft End User License Agreements (EULAs) that clearly document the purpose of AI workflows, define legal authorization for data usage, and establish hard restrictions on what cannot be done with user data.

Requirements - basic understanding of how privacy is expressed in a EULA
Audience - all levels
Session duration 45 minutes
