Vibe Coding vs Context Engineering vs Spec-Driven Development
AI can sketch a working feature from a chat prompt in minutes. That speed is real — but so is the wall teams hit when they try to ship it: inconsistent output, missing tests, surprise security gaps, and pull requests nobody wants to own. The problem isn't the AI. It's the absence of a workflow.
This session compares three concrete approaches teams are using right now — Vibe Coding, Context Engineering, and Spec-Driven Development — with honest trade-offs for each. We'll demo vibe coding to show where exploration shines and where it collapses at review time. Then we'll layer in context engineering: plugging in your own docs, API schemas, and tool permissions to get consistent, bounded outputs. Finally, we'll go spec-first: define constraints and acceptance criteria up front, then let AI generate specs, tasks, and implementation in a reviewable, CI-anchored pipeline.
Every transition is shown live using a small feature built end-to-end, with templates you can bring back to your team on Monday.
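To give a flavor of the spec-first idea before the session, here is a minimal, hypothetical sketch (the names, spec structure, and checks are illustrative, not taken from the session materials): a feature spec captured as plain data, with acceptance criteria a CI job can check mechanically against a pull request.

```python
# Hypothetical sketch of a spec-driven gate: a feature spec as data,
# plus a check a CI step could run on PR metadata. All names are
# illustrative; real tooling would pull this from a spec file and
# the PR itself.

SPEC = {
    "feature": "password-reset",
    "constraints": [
        "tokens expire after 15 minutes",
        "no email enumeration in responses",
    ],
    "acceptance": {
        "has_tests": True,    # the PR must include tests
        "max_endpoints": 2,   # scope guard: request + confirm only
    },
}

def check_pr(spec: dict, pr: dict) -> list[str]:
    """Return a list of spec violations; an empty list means the PR passes."""
    failures = []
    if spec["acceptance"]["has_tests"] and not pr.get("tests_added"):
        failures.append("missing tests")
    if pr.get("endpoints_touched", 0) > spec["acceptance"]["max_endpoints"]:
        failures.append("scope creep: too many endpoints touched")
    return failures

# A CI step would fail the build when check_pr() returns any violations.
print(check_pr(SPEC, {"tests_added": True, "endpoints_touched": 2}))  # []
```

The point of the sketch is the shape, not the checks themselves: once acceptance criteria live in a reviewable artifact rather than a chat prompt, both the AI's output and the humans reviewing it have the same contract to work against.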
Attendees leave able to:
- Differentiate vibe coding, context engineering, and spec-driven development by risk, speed, and governance
- Identify which approach fits exploration vs. delivery vs. production hardening
- Design a lightweight spec-driven workflow that produces reviewable PRs
- Apply context-engineering patterns to improve AI output consistency
- Evaluate trade-offs for existing codebases — not just greenfield
Ron Dagdag
Microsoft AI MVP and Research Engineering Manager @ Thomson Reuters
Fort Worth, Texas, United States