Session
Can We Trust AI-Generated Code? Maybe We've Been Asking the Wrong Question.
No one trusts AI-generated code. It looks right. It sounds confident. But does it actually do what we expect?
Having AI test its own work doesn't help. If we can't trust it to write code, why would we trust it to write tests after the fact? That's not verification; it's an echo chamber.
That leaves us checking everything manually. The safest bet is to assume the output is wrong and review every line yourself, which doesn't exactly scream "productivity boost."
So what’s the alternative?
Maybe we’ve been looking at this the wrong way. AI might be trustworthy, but only if we rethink how we guide it. What if there were a way to ensure it understands intent before it writes a single line of code? A way to catch mistakes before they happen instead of fixing them afterward?
An excited AI developer advocate and a cynical senior engineering manager take the stage to debate whether AI-driven development is finally ready for prime time or just another way to get things wrong.

Baruch Sadogursky
Principal Developer Advocate At Large
Nashville, Tennessee, United States