Session
Building Reliable LLM Apps on Azure Databricks - Grounded RAG, Cite‑or‑Fail, and Telemetry
Build LLM apps that leaders can trust without heavy CI/CD. In this hands‑on workshop, you’ll implement grounded retrieval on Azure Databricks, combine keyword and vector search, and enforce “cite‑or‑fail” so every answer either shows its sources or refuses safely. You’ll add safety filters (PII/toxicity), lightweight policy‑as‑code tests, and runtime evaluation to catch regressions before users do. We’ll instrument telemetry and traces that link prompt → data → answer for auditing and debugging, then discuss simple rollout patterns (shadow/canary) and cost/performance tuning. You’ll leave with a working notebook repo, datasets, and a practical checklist for moving from prototype to reliable production on Databricks.
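To give a flavor of the “cite‑or‑fail” pattern the session covers, here is a minimal sketch in Python. The `Passage` and `Answer` types, the `cite_or_fail` function, and the word‑overlap grounding check are illustrative assumptions, not the workshop’s actual code or a Databricks API; a production version would replace the heuristic with an entailment or LLM‑judge check.

```python
# Minimal "cite-or-fail" sketch (illustrative assumptions, not workshop code).
from dataclasses import dataclass, field
from typing import List
import uuid

@dataclass
class Passage:
    doc_id: str   # identifier of the source document or chunk
    text: str     # retrieved passage text
    score: float  # retrieval score from hybrid (keyword + vector) search

@dataclass
class Answer:
    text: str
    citations: List[str] = field(default_factory=list)
    # trace_id lets telemetry link prompt -> retrieved data -> answer
    trace_id: str = field(default_factory=lambda: str(uuid.uuid4()))

REFUSAL = "I can't answer that from the indexed sources."

def _overlap(answer: str, passage: str) -> float:
    """Fraction of the answer's content words that also appear in the passage."""
    answer_words = {w for w in answer.lower().split() if len(w) > 3}
    passage_words = set(passage.lower().split())
    return len(answer_words & passage_words) / len(answer_words) if answer_words else 0.0

def cite_or_fail(draft: str, passages: List[Passage],
                 min_score: float = 0.5, min_overlap: float = 0.4) -> Answer:
    """Return the draft answer with citations only if it is grounded in the
    retrieved passages; otherwise refuse. The overlap heuristic is a crude
    stand-in for an entailment or judge-model grounding check."""
    supported = [p for p in passages
                 if p.score >= min_score and _overlap(draft, p.text) >= min_overlap]
    if not supported:
        return Answer(text=REFUSAL)
    return Answer(text=draft, citations=[p.doc_id for p in supported])

# Example: a grounded draft is cited, an ungrounded one is refused.
docs = [Passage("kb/vector-search.md",
                "Databricks Vector Search supports hybrid keyword and vector queries.",
                score=0.82)]
print(cite_or_fail("Hybrid keyword and vector queries are supported.", docs).citations)
print(cite_or_fail("The moon is made of cheese.", docs).text)
```

Because every `Answer` carries a `trace_id`, the same record can be logged alongside the prompt and the retrieved `doc_id`s, which is the kind of prompt → data → answer linkage the telemetry portion of the workshop discusses.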
Shaurya Agrawal
Startup CTO & Board Advisor
Austin, Texas, United States