Speaker

Doneyli De Jesus

Principal Architect @ ClickHouse

Montréal, Canada

Doneyli is a Principal Architect at ClickHouse, serving as the primary technical advisor for organizations. He provides deep expertise in crafting tailored AI and data solutions that align with customer goals. With over 20 years in data and AI, spanning development through solution architecture, he leverages his robust skill set to guide organizations toward innovative solutions that transform their operations.

Inside ClickHouse, he is deeply involved in cross-functional collaboration on product development and deployment, and he often spearheads demonstrations and presentations that showcase AI capabilities.

Additionally, with his background in executive advising, data strategy, and solutions architecture, he dedicates time to mentoring data professionals, aiming to elevate their careers within the tech industry.

Area of Expertise

  • Business & Management
  • Finance & Banking
  • Information & Communications Technology
  • Real Estate & Architecture
  • Transports & Logistics

Topics

  • Artificial Intelligence (AI) and Machine Learning
  • Democratized Artificial Intelligence
  • The Future of Artificial Intelligence: Trends and Transformations
  • Large Language Models (LLMs)
  • Using AI and LLMs
  • LLM Apps at Scale
  • The Generative AI LLM Revolution (ChatGPT)
  • Retrieval-Augmented Generation (RAG)
  • Embeddings
  • Data Analytics
  • Big Data
  • Data Management
  • Data Platform
  • Snowflake
  • ClickHouse
  • OLAP
  • Data Science
  • Data Warehousing
  • Data Science & AI

When Your Metrics Lie: Observability for Nondeterministic AI Systems

Your dashboard shows 99.9% uptime, 200 ms latency, zero errors. Your users see confident nonsense. When LLMs become part of your stack, traditional observability breaks down: a successful HTTP response tells you nothing about whether the answer was actually useful.

Problem: AI systems introduce nondeterminism that breaks assumptions baked into our monitoring tools. The same request can produce wildly different outputs. An LLM confidently returning wrong information looks identical to success in your APM. Debugging requires full prompt/completion context that traditional trace storage wasn't designed for.

Solution: I'll share a production observability architecture that answers the question APM tools can't: "Was that response actually good?"
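As a minimal sketch of the idea (field names and structure are illustrative assumptions, not the production architecture), a trace record for an LLM call can keep the full prompt/completion context a standard APM span drops, plus a quality signal that no HTTP status code can express:

```python
import json
import time
import uuid
from dataclasses import dataclass, asdict, field
from typing import Optional

@dataclass
class LLMTrace:
    """One LLM call, with the context needed to answer 'was it good?'."""
    prompt: str
    completion: str
    model: str
    latency_ms: float
    # Filled in later by an evaluator or user feedback --
    # not by the HTTP status code.
    quality_score: Optional[float] = None
    trace_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    timestamp: float = field(default_factory=time.time)

def to_log_line(trace: LLMTrace) -> str:
    """Serialize the record for a log pipeline or columnar store."""
    return json.dumps(asdict(trace))

t = LLMTrace(prompt="What is our refund policy?",
             completion="Refunds are issued within 30 days.",
             model="example-model", latency_ms=212.0)
print(to_log_line(t))
```

The key difference from a conventional trace is that the record is only complete after a later evaluation step sets `quality_score`, decoupling "the request succeeded" from "the response was good."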

The Speed of Intelligence: Why Your AI Ceiling Is Your Data Infrastructure

Your AI agent makes 50 database queries per user interaction. At 1 second per query, users wait 50 seconds. At 250 milliseconds, they wait 12.5 seconds. At 50 milliseconds, the experience feels instant.
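The arithmetic above assumes the 50 queries run sequentially, so per-query latency adds up linearly. A tiny sketch makes the multiplication explicit:

```python
# Sequential-query latency model: an agent that issues N database
# queries one after another makes the user wait N * per-query latency.
def total_wait_seconds(num_queries: int, per_query_latency_s: float) -> float:
    """Total user-visible wait for num_queries sequential queries."""
    return num_queries * per_query_latency_s

for latency_s in (1.0, 0.25, 0.05):
    total = total_wait_seconds(50, latency_s)
    print(f"{latency_s * 1000:.0f} ms/query -> {total:.1f} s total wait")
```

Parallelizing independent queries changes the model, but agents often chain queries (each step depends on the last), which is why the linear case dominates in practice.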

This is the 50-Query Problem, and it explains why most production AI agents feel slow, expensive, or unreliable.

After implementing real-time AI systems for enterprise customers, I have learned that the constraint is rarely the model; it is the data infrastructure feeding the model. Your AI ceiling is your data infrastructure ceiling.

This talk explores the infrastructure decisions that separate demo agents from production agents:

  • The Query Multiplication Effect: why agents amplify every millisecond of database latency
  • Specialist vs. Generalist Agents: why scoped agents beat "all-knowing" agents for accuracy and debugging
  • The Business Value Stack: fresh data to real-time context to AI reasoning to user insight
  • Query Transparency: audit trails for compliance, debugging, and user trust

The 50-Query Problem: Observability When AI Agents Multiply Your Database Calls

Your monitoring dashboards are about to break. When AI agents replace single-query UIs with multi-step reasoning, a single user request can trigger 50 database queries. At 1 second per query, that's 50 seconds of wait time. Your users will leave, your SLAs will breach, and your existing observability stack won't tell you why.

Problem: Agentic AI introduces nondeterministic system behavior that traditional APM tools weren't designed for. The same request can take 3 different paths, make varying numbers of database calls, and produce unpredictable latency patterns. Without new observability approaches, debugging becomes guesswork.

Solution: I'll share patterns from enterprise AI deployments that treat agent observability as a first-class concern.

Getting AI to Work While You Enjoy Your Life

I spent a Saturday morning skiing with my daughters instead of sitting in front of my computer. By noon, I had reviewed 5 hours of AI-generated work completed during my slope time. This wasn't magic; it was a framework.

Problem: AI productivity tools promise 10x output but deliver chaos. Engineers spend more time prompting, debugging AI hallucinations, and fixing context drift than they save. The problem isn't the AI; it's treating AI like a conversation partner instead of an executor.

Solution: The Architect-Executor Framework separates human thinking from AI implementation. Humans architect (define specs, acceptance criteria, boundaries). AI executes (implements within those boundaries). Humans review and iterate.

This pattern mirrors how successful DevOps teams work: clear specifications, automated execution, human verification. It's the same mental model applied to AI-assisted development.
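The Architect-Executor split can be sketched in miniature (all names here are hypothetical illustrations, not the framework's actual API): the human architects by writing executable acceptance criteria, and whatever the AI executor produces is only accepted if it passes them.

```python
import re

def acceptance_criteria(slugify):
    """Architect-defined boundaries: what the implementation must do."""
    assert slugify("Hello World") == "hello-world"
    assert slugify("  trim me  ") == "trim-me"
    assert slugify("A--B") == "a-b"

# Stand-in for an AI-executor-produced implementation:
def slugify(text: str) -> str:
    text = text.strip().lower()
    return re.sub(r"[^a-z0-9]+", "-", text).strip("-")

# Human verification step: passes silently or raises AssertionError.
acceptance_criteria(slugify)
```

The point of the pattern is that the human's effort goes into the criteria, not the conversation; a failing implementation is simply regenerated against the same fixed spec.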

Unlocking Success: How to Choose the Right AI Use Case

Are you eager to leverage AI but unsure where to begin? This session will equip you with proven frameworks and strategic insights to identify the most impactful AI use cases. Discover how to prioritize and invest in projects that deliver real value, ensuring your AI initiatives align with your business goals and drive meaningful outcomes.

Orchestrating AI Agents for Structured and Unstructured Data

In this session, we'll explore deploying AI agents to seamlessly query and analyze both structured and unstructured data. Through practical demonstrations, attendees will learn how to set up services that enable AI agents to retrieve and process information from diverse data sources. We'll demonstrate building an AI agent that integrates data from multiple tables and documents, delivering accurate and comprehensive responses. This talk is designed for data professionals and AI enthusiasts aiming to enhance their data interaction capabilities.

LLMs: The Right Context is All You Need

Large Language Models (LLMs) are powerful, but their effectiveness depends heavily on one key ingredient—context. In this session, we’ll explore real-world use cases where better context management significantly improved accuracy, reduced latency, and delivered more trustworthy answers. From parsing 500-page lease agreements to generating precise SQL queries over messy databases, you’ll learn practical strategies for chunking, retrieval, and semantic enrichment that can make or break your LLM implementation. If you're building or scaling GenAI solutions, this talk is for you.
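The chunk-then-retrieve pattern the abstract describes can be sketched minimally (assumptions: whitespace tokenization and keyword-overlap scoring stand in for real embeddings and a vector index):

```python
def chunk(text: str, size: int = 40) -> list:
    """Split a document into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def retrieve(chunks: list, query: str, k: int = 2) -> list:
    """Return the k chunks sharing the most words with the query.
    (A real system would score with embeddings, not word overlap.)"""
    q = set(query.lower().split())
    scored = sorted(chunks, key=lambda c: -len(q & set(c.lower().split())))
    return scored[:k]

doc = "the quick brown fox jumps over the lazy dog"
top = retrieve(chunk(doc, size=3), "lazy dog", k=1)
print(top)
```

Only the retrieved chunks are placed in the LLM prompt, which is what keeps context small, latency low, and answers grounded in the right part of a 500-page document.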
