Speaker

Shreya Singhal

AI Applied Scientist at Claritev

Austin, Texas, United States

Shreya Singhal is an Applied Scientist working on Generative AI and LLM-based systems at Claritev. She specializes in building production-grade AI agents, RAG pipelines, and document intelligence platforms. With a Master’s in Computer Science from UT Austin and experience across applied research and backend engineering, her work focuses on improving the reliability and trustworthiness of real-world AI systems. She has spoken at AI4 Conference and AI Infra Connect and enjoys translating cutting-edge AI research into practical developer patterns.

Area of Expertise

  • Business & Management
  • Health & Medical

Topics

  • AI
  • Developer
  • Software Development
  • Agentic AI
  • Generative & Agentic AI
  • Agentic AI / Autonomous Agents

Beyond the Prompt: Using Bias Subspaces to Build Algorithmic Guardrails

As Generative AI moves into production-grade enterprise environments, traditional "keyword-based" guardrails are proving insufficient for catching nuanced, latent biases. While most developers focus on surface-level prompt engineering, the true vulnerabilities often lie deeper within the model's latent representations.

In this session, we will explore a more rigorous, research-backed approach to AI safety. Drawing on my research at UT Austin, I will demonstrate how analyzing GloVe embeddings through bias subspaces can reveal hidden correlations between abstract concepts and ingrained prejudices. We will discuss:

  • Identifying Latent Bias: How models separate abstract vs. concrete words, and where human-rated concreteness scores diverge from model behavior.
  • Building Mathematical Guardrails: Moving from "black-box" filtering to algorithmic detection of biased vector directions.
  • Real-World Application: How to apply these research insights to harden autonomous agents and multi-model pipelines against ethical failures.

Attendees will walk away with a framework for building "Constitutional" guardrails that address bias at the representation level, ensuring more inclusive and reliable AI deployments.
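As an illustrative sketch of the bias-subspace idea (not the speaker's actual research code), the snippet below builds a one-dimensional bias direction from definitional word pairs and projects other words onto it. The embedding values here are toy stand-ins for real GloVe vectors; production work would typically use PCA over many pairs rather than a simple mean of differences.

```python
import numpy as np

# Toy 4-dimensional "embeddings" standing in for real GloVe vectors
# (hypothetical values chosen only to make the geometry visible).
emb = {
    "he":       np.array([ 1.0, 0.2, 0.1, 0.0]),
    "she":      np.array([-1.0, 0.2, 0.1, 0.0]),
    "man":      np.array([ 0.9, 0.1, 0.3, 0.1]),
    "woman":    np.array([-0.9, 0.1, 0.3, 0.1]),
    "engineer": np.array([ 0.5, 0.8, 0.2, 0.0]),
    "nurse":    np.array([-0.5, 0.7, 0.2, 0.1]),
    "table":    np.array([ 0.0, 0.1, 0.9, 0.2]),
}

def bias_direction(pairs):
    # Average the difference vectors of definitional pairs to get a
    # one-dimensional "bias subspace" (real pipelines often use PCA).
    d = np.mean([emb[a] - emb[b] for a, b in pairs], axis=0)
    return d / np.linalg.norm(d)

def bias_score(word, direction):
    # Cosine-style projection of a word onto the bias direction:
    # large magnitude = the word leans along the biased axis.
    v = emb[word]
    return float(np.dot(v / np.linalg.norm(v), direction))

direction = bias_direction([("he", "she"), ("man", "woman")])
for w in ["engineer", "nurse", "table"]:
    print(w, round(bias_score(w, direction), 3))
```

A guardrail built this way can flag outputs whose key terms project strongly onto the bias direction, instead of matching a keyword blocklist.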

AI That Argues With Itself: Building Self-Debating Systems That Catch Their Own Bugs

Modern AI systems are incredibly capable, and just as confidently wrong.

In this talk, we explore a new architectural pattern: AI systems that argue with themselves. By orchestrating multiple AI agents with opposing perspectives, we can uncover hidden bugs, reduce hallucinations, and dramatically improve output quality without adding human reviewers to the loop.

I’ll demonstrate how to design and implement a self-debating AI system using real-world examples: debugging code, validating architectural decisions, and stress-testing product requirements. We’ll explore when AI disagreement is useful, when it fails, and how to measure improvement beyond “it feels better.”
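The proposer/critic loop at the heart of this pattern can be sketched as follows. This is a minimal illustration, not the talk's implementation: `propose` and `critique` are hypothetical stubs standing in for real LLM calls, with the proposer "forgetting" an edge case until the critic objects.

```python
from dataclasses import dataclass, field

@dataclass
class Debate:
    """Minimal self-debate orchestrator: a proposer drafts an answer,
    a critic raises objections, and the loop repeats until the critic
    is satisfied or the round budget runs out."""
    max_rounds: int = 3
    transcript: list = field(default_factory=list)

    def run(self, task, propose, critique):
        answer = propose(task, feedback=None)
        for _ in range(self.max_rounds):
            objection = critique(task, answer)
            self.transcript.append((answer, objection))
            if objection is None:      # critic has no remaining objections
                return answer
            answer = propose(task, feedback=objection)
        return answer                  # best effort after max rounds

# Hypothetical stub agents (real systems would call two LLM endpoints
# with opposing system prompts).
def propose(task, feedback):
    if feedback is None:
        return "def safe_div(a, b): return a / b"
    return "def safe_div(a, b): return a / b if b else None"

def critique(task, answer):
    if "if b" not in answer:
        return "Division by zero is unhandled."
    return None

debate = Debate()
result = debate.run("write safe_div", propose, critique)
print(result)
```

The `transcript` doubles as an audit log, which is what makes disagreement measurable: you can count how many rounds each task needed and which objections actually changed the answer.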

Upcoming Events

  • WeAreDevelopers World Congress 2026 - North America, September 2026, San Jose, California, United States
  • DeveloperWeek New York 2026, June 2026, New York City, New York, United States
