David Elgueta
AWS Certified AI Practitioner & GDG Speaker specializing in Agentic Workflows and AI-Driven Productivity
Orlando, Florida, United States
With over 25 years of experience architecting high-performance applications, I thrive at the intersection of technical depth and product discovery. My career is defined by a unique duality: I am a technical builder who has developed AI-powered workflows and an elite educator who has trained 500+ professionals, achieving a 90% average learner satisfaction rate.
I hold AWS Authorized Instructor (AAI) Champion status, the highest tier AWS recognizes for exceptional teaching quality and technical expertise. My deep platform fluency led to my selection as an AWS Subject Matter Expert (SME) to develop official certification exams, where I helped write and evaluate the standards for global AWS certification programs.
Today, as a Senior Technical Communicator and AI Product Contributor, I lead the discovery and design of AI Assistants and autonomous workflows. I don't just talk about AI; I build it:
- Agentic Workflows: I design and prioritize feature backlogs for integrated AI Assistants across DCM, BI, and Knowledge-Base products.
- Prompt Engineering: I developed a prompt-engineered RFP response toolkit that enabled a 50% shortlist rate.
- Community Leadership: I am an active speaker at Google Developer Groups, sharing hands-on lessons on building AI agents and demystifying prompt engineering for developer audiences.
My expertise is backed by a robust certification portfolio, including AWS Machine Learning – Specialty and AWS Certified AI Practitioner. As I expand my fluency into Azure AI, I bring a multi-cloud perspective to the challenge of leveraging company knowledge to build intelligent assistants that boost productivity and streamline enterprise workflows.
Spec It, Ship It: A Hands-On Workshop in Spec-Driven Development for AI Agents
Most developers have now felt both sides of AI-assisted coding: the rush of a demo built in twenty minutes, and the dread of opening that same codebase two weeks later. Spec-driven development is the workflow that bridges those two moments — and the only way to really learn it is to do it.
This is a hands-on workshop, not a lecture. We'll start with a small, deliberately messy vibe-coded app — the kind of thing every attendee has written at least once. Together, we'll diagnose what's actually wrong with it: not the syntax, but the assumptions the agent silently made when nobody told it what "done" meant.
Then you'll write a spec. A real one, using GitHub Spec Kit, with the six elements that separate a spec from a wish: outcomes, scope boundaries, constraints, prior decisions, task breakdown, and verification criteria. We'll walk through each element together, then you'll apply it to the workshop app in your own environment. I'll be circulating to help when the spec fights back — which it will, because writing a spec forces the decisions vibe coding lets you postpone.
With the spec in hand, you'll hand it to your AI coding agent and watch it regenerate the feature. We'll compare what came out against what you vibed into existence an hour earlier. The differences are the lesson.
By the end of the session, you'll have a working spec, a regenerated app, and — more importantly — a workflow you can take back to your own codebase on Monday.
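To make the six elements concrete, here is a minimal sketch of a spec as a checklist object. This is a teaching illustration only, assuming nothing about GitHub Spec Kit's real file format; the element names come straight from the workshop outline.

```python
from dataclasses import dataclass, field

@dataclass
class Spec:
    """Illustrative container for the six elements of an agent-ready spec.
    Not Spec Kit's actual format -- just the checklist the workshop uses."""
    outcomes: list[str] = field(default_factory=list)
    scope_boundaries: list[str] = field(default_factory=list)
    constraints: list[str] = field(default_factory=list)
    prior_decisions: list[str] = field(default_factory=list)
    task_breakdown: list[str] = field(default_factory=list)
    verification_criteria: list[str] = field(default_factory=list)

    def missing_elements(self) -> list[str]:
        """Name the elements still empty -- the gaps where an agent
        would otherwise silently 'answer for you'."""
        return [name for name, items in vars(self).items() if not items]

spec = Spec(
    outcomes=["Users can reset their password via email"],
    constraints=["No new runtime dependencies"],
)
print(spec.missing_elements())
# -> ['scope_boundaries', 'prior_decisions', 'task_breakdown', 'verification_criteria']
```

An empty `missing_elements()` list is exactly the state the workshop drives you toward before handing the spec to an agent.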
* Prerequisites (attendees should arrive with)
- A laptop with Node.js and Git installed
- An AI coding agent of their choice already set up: GitHub Copilot, Cursor, Claude Code, or any Spec Kit–compatible agent
- A working familiarity with one programming language (the workshop will use TypeScript, but the patterns are language-agnostic)
- Optional: GitHub Spec Kit pre-installed (we'll cover installation, but starting with it ready saves ten minutes)
A setup guide will be shared one week before the event.
* Key Takeaways
- Hands-on experience writing a spec that an AI agent can actually execute against
- Working knowledge of GitHub Spec Kit's core workflow: specify, plan, tasks, implement
- A real diagnostic skill — learning to spot the moments where an agent is about to "answer for you"
- A reusable spec template you can apply to your own projects immediately
- A clear sense of when spec-driven development is worth the upfront cost and when iterative prompting is still the right call
* Level: Intermediate
* Format: Hands-on Workshop, 90 minutes
Building the Multi-Agentic AI Workforce: From Prompt Engineering to Automated Workflows
Move beyond the chat box and learn how to build a coordinated "team" of AI agents. In this hands-on lab, we will build a multi-agent system designed to handle workflows inspired by real-world applications. Leveraging Azure AI Foundry, we will transition from basic prompting to sophisticated orchestration, with a focus on accuracy, safety, and business value.
Participants will build:
- The Researcher Agent: A specialized agent using Azure AI Search and RAG (Retrieval-Augmented Generation) to extract and synthesize complex requirements from unstructured PDFs or compliance documents.
- The Analyst Agent: A governance-focused agent that utilizes Azure AI Content Safety and custom evaluation loops to audit outputs for bias, hallucinations, and alignment with responsible AI frameworks.
- The Writer Agent: A generative agent that consumes the verified data from the Researcher and Analyst to produce compliant, highly tailored professional responses.
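The hand-off between the three agents can be sketched as a plain sequential pipeline. The agent bodies below are stubs, assumed for illustration; in the lab they would be backed by Azure AI Foundry model calls, Azure AI Search retrieval, and Content Safety checks rather than string handling.

```python
# Minimal sketch of the Researcher -> Analyst -> Writer hand-off (stubbed).

def researcher(document: str) -> dict:
    """Extract requirements from an unstructured document (stand-in for the RAG step)."""
    requirements = [line.strip("- ").strip()
                    for line in document.splitlines()
                    if line.strip().startswith("-")]
    return {"requirements": requirements}

def analyst(findings: dict) -> dict:
    """Audit the Researcher's output before it reaches the Writer
    (stand-in for Content Safety + custom evaluation loops)."""
    findings["approved"] = all(len(r) > 0 for r in findings["requirements"])
    return findings

def writer(findings: dict) -> str:
    """Generate a response only from Analyst-approved data."""
    if not findings["approved"]:
        return "BLOCKED: findings failed governance review"
    return "Response covering: " + "; ".join(findings["requirements"])

doc = """RFP excerpt:
- Must support SSO
- Data residency in the US
"""
print(writer(analyst(researcher(doc))))
```

The key design point the lab emphasizes survives even in this toy version: the Writer never sees data the Analyst has not approved.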
Deliverables for your toolkit:
* System Prompt Library: A collection of production-ready persona instructions for specialized agent roles.
* Orchestration Template: A ready-to-clone GitHub repository featuring an agent loop built with Python/Jupyter and Azure AI Prompt Flow.
* 1-page Governance Checklist: A practical framework for auditing AI-generated outputs to ensure reliability in high-stakes environments.
Prompt Engineering: There Are No Stupid Questions
Discover how a well-crafted prompt can unlock the full potential of generative AI models, guiding them to produce exceptional results. Learn about common pitfalls and best practices to avoid, ensuring your AI interactions are always insightful and productive.
Whether you're a seasoned AI developer or just starting your journey, this session will equip you with the knowledge and tools to become a master of prompt engineering.
Building Autonomous AI Agents: Practical Architecture from AWS to Azure
Moving beyond simple chatbots, AI Agents represent the next frontier in automation, capable of reasoning, using tools, and executing complex workflows. In this technical session, I pull back the curtain on the architecture behind modern AI agents.
Drawing from my experience as an AWS Authorized Instructor Champion and a Senior Technical Communicator, I will compare the "agentic" capabilities of Amazon Bedrock and Azure AI Foundry.
What you will learn:
- The "Reasoning" Loop
- Orchestration Frameworks
- Real-World Implementation
- Building for Reliability
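One common shape of the "reasoning" loop covered above is a ReAct-style cycle: the model proposes an action, the runtime executes the matching tool, and the observation is fed back until the model emits a final answer. In this sketch, `fake_model` is a hypothetical stand-in for a Bedrock or Azure AI Foundry model call.

```python
# ReAct-style agent loop, with the LLM replaced by a scripted stub.

def calculator(expression: str) -> str:
    # Demo tool only; never eval untrusted input in a real agent.
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def fake_model(history: list[str]) -> dict:
    """Stand-in for an LLM call: one tool request, then a final answer."""
    if not any(h.startswith("observation:") for h in history):
        return {"action": "calculator", "input": "6 * 7"}
    return {"final_answer": history[-1].split(": ", 1)[1]}

def run_agent(question: str, max_steps: int = 5) -> str:
    history = [f"question: {question}"]
    for _ in range(max_steps):
        step = fake_model(history)
        if "final_answer" in step:
            return step["final_answer"]
        # Execute the requested tool and feed the observation back in.
        observation = TOOLS[step["action"]](step["input"])
        history.append(f"observation: {observation}")
    return "stopped: step budget exhausted"

print(run_agent("What is 6 * 7?"))  # -> 42
```

The `max_steps` budget is the simplest form of the "building for reliability" concern: an agent loop must always have a bounded exit.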
Preferred Duration: 30 minutes (Technical Session).
Target Audience: Developers, Solutions Architects, and Product Managers.
Technical Level: 200–300 (Intermediate/Advanced).
Previous Experience: I have delivered similar technical content at GDG Central Florida and GDG Tampa.
Real-Life Context: My insights come from hands-on work developing AI-powered tools and workflows, including prompt-engineered toolkits.
Vibe Coding Got You to a Demo. Spec-Driven Development Gets You to Production.
A year ago, vibe coding went viral. Describe what you want, let the agent build it, ship the demo. It lowered the floor and made prototyping faster than anyone expected. It also introduced a surplus of code nobody wants to maintain — architectural drift, silent contract breaks, and the agent's favorite move: confidently building the wrong thing.
Spec-driven development is the pattern that raised the ceiling. The specification becomes the source of truth, and code becomes a generated artifact that implements it. If the code and the spec disagree, you fix the code, not the spec. It sounds like waterfall. It isn't — because when regeneration takes minutes instead of months, the economics of "getting the spec right" completely change.
In this session, we'll build the same small app twice. First the vibe-coded version: fast, demo-ready, and quietly broken in the ways that matter. Then the spec-driven version using GitHub Spec Kit, walking through the six elements every agent-ready spec needs — outcomes, scope boundaries, constraints, prior decisions, task breakdown, and verification criteria. We'll look at where the two versions diverge, what kinds of defects each approach catches, and where SDD is genuinely better versus where it's just extra process.
You'll leave with a working mental model for when to reach for vibes, when to reach for specs, and how to tell the difference before your users tell you.
* Level: Intermediate
* Format: Technical Session, 35–45 minutes
DevFest Florida Orlando 2024 Sessionize Event