Session
AI-Generated Code Is Easy. AI-Generated Trust Is Hard.
Large language models can generate code in seconds. Production software, however, is judged not by speed but by reliability, traceability, and compliance. In regulated domains we already know how trustworthy systems are built: through requirements traceability, reviews, validation, and audit trails. These patterns are well established. But how do we apply them when the code is generated by machines?
How do we ensure reproducibility with probabilistic outputs?
How do we link requirements to generated artifacts?
How do we integrate quality gates into AI-driven development processes?
Drawing on practical experience, this talk shows how proven software engineering disciplines can be transferred to AI coding agents so that process and quality standards are met even in regulated environments.
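To make the traceability question concrete, here is a minimal sketch of a quality gate that links requirements to generated artifacts. The `Requirement:` tag convention and the `trace_requirements` helper are hypothetical illustrations, not part of the speaker's tooling or of QuineAI:

```python
import re

# Hypothetical convention: every AI-generated file must carry a
# "Requirement:" tag (e.g. "# Requirement: REQ-101") so that
# generated artifacts remain traceable to their requirements.
REQ_TAG = re.compile(r"Requirement:\s*(REQ-\d+)")

def trace_requirements(files: dict[str, str], known_reqs: set[str]) -> list[str]:
    """Return a list of traceability violations for the given file contents."""
    violations = []
    for name, content in files.items():
        tags = REQ_TAG.findall(content)
        if not tags:
            violations.append(f"{name}: no requirement tag")
        for req in tags:
            if req not in known_reqs:
                violations.append(f"{name}: unknown requirement {req}")
    return violations

# Example: one traced file, one untagged file.
generated = {
    "parser.py": "# Requirement: REQ-101\ndef parse(): ...",
    "util.py": "def helper(): ...",
}
print(trace_requirements(generated, {"REQ-101"}))
# → ['util.py: no requirement tag']
```

A check like this can run as a gate in CI: the build fails whenever a generated artifact cannot be traced back to an approved requirement, which is one way the established audit-trail pattern carries over to AI-driven development.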
Alexander Lehmann
Software Architect, Inventor of QuineAI
Dresden, Germany