Session

Please Explain, AI!

We’re comfortable when AI gives the right answer. But can we trust why it gave that answer? This talk dives into the crux of LLM adoption: explainability.

I’ll walk through a practical explainability stack you can actually ship into production:
• Token attribution: Integrated Gradients (IG) and SHAP to see which input tokens actually mattered (see the sketch after this list).
• Attention visualization & prompt tracing: making hidden attention flows visible.
• Logit-lens & neuron probes: surfacing what layers really “know.”
• Counterfactual testing: nudging inputs to test model sensitivity.
• Chain-of-thought: when to trust it, and when it’s a dangerous mirage.
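
As a taste of the token-attribution step, here is a minimal sketch using SHAP over a Hugging Face text-classification pipeline; the model name and example sentence are placeholders, not the exact setup used in the talk:

```python
# Minimal token-attribution sketch: SHAP wrapped around a Hugging Face pipeline.
# Model and example text are illustrative stand-ins for whatever you are auditing.
import shap
from transformers import pipeline

clf = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    return_all_scores=True,  # SHAP needs a score for every label, not just the top one
)

explainer = shap.Explainer(clf)  # builds a text masker around the pipeline
shap_values = explainer(["The refund process was painless and fast."])

shap.plots.text(shap_values)  # highlights which tokens pushed the score up or down
```

Integrated Gradients gives a similar per-token view but needs white-box access to embeddings and gradients, so it applies to the open-weights model rather than the commercial API.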

The novelty is simple: explainability as a design layer for real products, not a research toy. This will be an interactive session: together we’ll probe live models (a commercial LLM API alongside a small open-source SLM) and watch their reasoning “light up” across layers. By the end, you’ll leave with practical patterns for embedding transparency into GenAI systems, so trust is architected in, not retrofitted later.
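
For the layer-by-layer “light up” part, a rough logit-lens sketch looks like this (GPT-2 small stands in for the open-source SLM and the prompt is illustrative): project each layer’s hidden state through the final LayerNorm and the unembedding matrix, and read off the top token at every depth.

```python
# Rough logit-lens sketch: what does each layer "believe" the next token is?
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

inputs = tok("The capital of France is", return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# out.hidden_states is a tuple of (n_layers + 1) tensors of shape [batch, seq, hidden]
for layer, h in enumerate(out.hidden_states):
    # project the last position through the final LayerNorm and the LM head
    logits = model.lm_head(model.transformer.ln_f(h[:, -1, :]))
    top_token = tok.decode(logits.argmax(dim=-1))
    print(f"layer {layer:2d} -> {top_token!r}")
```

Typically the prediction is noise in the early layers and snaps to a sensible answer somewhere in the later ones; that transition is what we’ll watch live.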

Indranil Chandra

Architect ML & Data Engineer @ Upstox

Mumbai, India
