Speaker

David Murphy

28 years across MySQL, Oracle, MongoDB, Elasticsearch, and beyond. Still shipping code. Still breaking things in staging, not production.

David Murphy is a Senior Database Engineer at Motion, the creative advertising analytics platform, where he owns database architecture, SRE, and cloud infrastructure across the full stack. With nearly 30 years working across MongoDB, MySQL, Oracle, Elasticsearch, Kafka, and RabbitMQ, he has built and stabilized data systems at every scale — from early DBaaS at ObjectRocket to critical aviation infrastructure at Aer Lingus, from game platform architecture at Electronic Arts to leading the DBRE function at Udemy as principal engineer. He served as Practice Manager for MongoDB at Percona and is a MongoDB Masters alumnus and open source contributor across multiple database technologies. David builds his talks around production-proven solutions — and tries to publish the code to go with every one of them.

One App, Three Databases: Why Motion Uses MongoDB, Elasticsearch, and PostgreSQL — and Keeps Them All

When AI tooling defaults to PostgreSQL and every new ORM ships a Postgres driver first, it is tempting to ask: should we just consolidate everything into Postgres? This talk answers that question honestly — with real architecture decisions, real migration cost analysis, and a frank look at why polyglot persistence is often the right call, not technical debt to be eliminated.

Motion is a creative advertising analytics platform. Marketers use it to understand ad performance across Meta, TikTok, YouTube, and LinkedIn and to research and analyze creative content at scale. We run three purpose-built databases in production.

MongoDB handles the operational core. Ad platform data is the problem case for rigid schemas: Meta, TikTok, YouTube, and LinkedIn each model ad units, insights, and breakdowns differently. The document model absorbs that schema variance without migrations every time a platform ships a new field shape. Aggregation pipelines and change streams power the core sync and analytics architecture.
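To make the schema-variance point concrete, here is a minimal sketch of the kind of shape differences the abstract describes. The field names below are illustrative assumptions, not the actual Meta or TikTok API schemas, and the accessor is a hypothetical helper — the point is that one collection can hold both shapes without a migration:

```python
# Two hypothetical "ad insight" documents as different platforms might
# shape them -- field names are illustrative, not real API schemas.
meta_insight = {
    "platform": "meta",
    "ad_id": "a1",
    "impressions": 1000,                        # top-level metric
    "breakdowns": {"age": {"18-24": 400, "25-34": 600}},
}
tiktok_insight = {
    "platform": "tiktok",
    "ad_id": "a2",
    "metrics": {"impressions": 2000},           # same metric, nested
    "dimensions": [{"name": "age", "value": "18-24", "impressions": 900}],
}

def impressions(doc: dict) -> int:
    """Read total impressions regardless of which platform shaped the doc.

    Both shapes live side by side in one collection; the application
    absorbs the variance instead of a schema migration.
    """
    if "impressions" in doc:
        return doc["impressions"]
    return doc.get("metrics", {}).get("impressions", 0)
```

In a rigid relational schema, each new field shape a platform ships would force a migration; in the document model it is just another document.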

Elasticsearch handles what MongoDB cannot serve economically at scale: faceted aggregations across hundreds of millions of ad insight records, full-text creative search with analyzers and synonyms, and fast metric rollups for analytics dashboards. We sync operational data into Elasticsearch — tools like Monstache make this tractable — to get the best of both: MongoDB's operational flexibility and Elasticsearch's aggregation and search performance.
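The core of that sync pattern is mapping MongoDB change-stream events onto Elasticsearch bulk actions. The sketch below is a hand-rolled illustration of the idea, not Monstache's implementation; it targets the action shape accepted by `elasticsearch.helpers.bulk`, and a production pipeline would also handle resume tokens, batching, and retries:

```python
def change_to_action(event: dict, index: str = "ad_insights") -> dict:
    """Map one MongoDB change-stream event to an Elasticsearch bulk action.

    Covers the three common operation types. The event fields used here
    (operationType, documentKey, fullDocument, updateDescription) are the
    standard change-stream event shape.
    """
    doc_id = str(event["documentKey"]["_id"])
    op = event["operationType"]
    if op == "delete":
        return {"_op_type": "delete", "_index": index, "_id": doc_id}
    if op in ("insert", "replace"):
        # Drop _id from the source body; it becomes the ES document id.
        source = {k: v for k, v in event["fullDocument"].items() if k != "_id"}
        return {"_op_type": "index", "_index": index, "_id": doc_id,
                "_source": source}
    if op == "update":
        # Partial update: ship only the fields that changed.
        changed = event["updateDescription"]["updatedFields"]
        return {"_op_type": "update", "_index": index, "_id": doc_id,
                "doc": changed}
    raise ValueError(f"unhandled operation type: {op}")
```

Feeding a `collection.watch()` cursor through this function and into the bulk helper is the whole pattern in miniature: MongoDB stays the system of record, Elasticsearch stays a derived, rebuildable view.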

PostgreSQL anchors our AI workflow builder, a separate product built from scratch. Workflows, branches, nodes, edges, and run history form a graph structure that is inherently relational. Strict foreign keys, auditable run history, and join-heavy queries made Postgres the right tool for this workload. This is not a pgvector story — our memory and retrieval layer uses a managed external service, and the Postgres decision was purely about workflow state modeling.
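A minimal sketch of why a workflow graph wants a relational home, using stdlib `sqlite3` as a runnable stand-in for Postgres. The tables and columns here are illustrative, not Motion's actual schema; the point is that strict foreign keys reject a dangling edge outright, and the graph reads back through joins:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only on opt-in
conn.executescript("""
CREATE TABLE workflows (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE nodes (
    id INTEGER PRIMARY KEY,
    workflow_id INTEGER NOT NULL REFERENCES workflows(id),
    kind TEXT NOT NULL
);
CREATE TABLE edges (
    src INTEGER NOT NULL REFERENCES nodes(id),
    dst INTEGER NOT NULL REFERENCES nodes(id),
    PRIMARY KEY (src, dst)
);
""")
conn.execute("INSERT INTO workflows VALUES (1, 'demo')")
conn.executemany("INSERT INTO nodes VALUES (?, 1, ?)",
                 [(1, "trigger"), (2, "llm_step")])
conn.execute("INSERT INTO edges VALUES (1, 2)")

# An edge pointing at a node that does not exist is rejected at write time.
try:
    conn.execute("INSERT INTO edges VALUES (2, 99)")
    dangling_allowed = True
except sqlite3.IntegrityError:
    dangling_allowed = False

# Join-heavy read: reconstruct the graph with its workflow context.
rows = conn.execute("""
    SELECT w.name, n1.kind, n2.kind
    FROM edges e
    JOIN nodes n1 ON n1.id = e.src
    JOIN nodes n2 ON n2.id = e.dst
    JOIN workflows w ON w.id = n1.workflow_id
""").fetchall()
```

The same constraints in a document store would be application-level conventions; here the database refuses the inconsistent state before it exists.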

We will share what it would actually cost to migrate core MongoDB workloads to Postgres: schema redesign for variable ad platform shapes, query rewrites, operational tooling replacement, and the subtler cost of losing aggregation pipeline patterns that power the core product.

The lesson: polyglot persistence is not a liability. It is purposeful specialization — and consolidation has a real price that rarely pencils out.

Key Takeaways:
• The specific workload characteristics that make MongoDB, Elasticsearch, and PostgreSQL each the right choice
• Why multi-platform ad data is a strong case for the document model over relational tables
• The MongoDB → Elasticsearch sync pattern for teams needing analytics performance without migrating operational data
• Real cost categories for migrating away from MongoDB that teams routinely underestimate
• How to justify a multi-database architecture to leadership — and when consolidation actually does make sense
Target Audience: Architects, engineering leads, and senior engineers navigating database strategy decisions — especially teams feeling pressure to consolidate or modernize onto a single database.

Your LLM Will Drop That Table: Making AI-Assisted Database Work Safe

AI coding assistants are now writing queries against your production databases. They will generate an aggregation that scans a collection with 500 million documents. They will suggest a join with no index coverage. They will write a migration that destroys data if run out of order. This is not hypothetical — it has happened, and it will happen to your team.

The root cause is not the AI. It is that the AI has no context about the database it is touching. It does not know which collections hold half a billion documents. It does not know your index strategy. It cannot distinguish between a query that returns in 40ms and one that takes down your production cluster. It writes plausible-looking code with no intuition about what that code will cost at scale.

Most teams responded by adding more code review. That is not enough. Human reviewers miss AI-generated query patterns they have not seen before, and the volume of AI output is already beyond what careful manual review can absorb.

This talk walks through a different approach: giving the AI the context it is missing, at every point in the development workflow where a bad query can still be stopped. We use a single story — a developer asking their AI assistant to build a feature that touches a core collection with hundreds of millions of documents — to show what happens at each stage when the safety layer is absent, and what changes when it is in place.

The patterns are practical, database-agnostic, and implementable without a dedicated DBA.
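As one possible flavor of such an enforcement point, here is a hedged sketch of a pre-execution check that vetoes a query whose plan would collection-scan a large namespace. The nested plan shape follows MongoDB's `explain` output (`queryPlanner.winningPlan` with `stage`/`inputStage`); the size threshold and the idea of wiring collection stats into the check are illustrative assumptions, not the talk's specific mechanism:

```python
# Illustrative threshold -- tune per workload.
LARGE_DOC_COUNT = 1_000_000

def plan_stages(stage: dict):
    """Yield every stage name in a (possibly nested) winning plan."""
    yield stage["stage"]
    child = stage.get("inputStage")
    if child:
        yield from plan_stages(child)

def is_safe(explain: dict, doc_count: int) -> bool:
    """Reject plans that COLLSCAN a collection above the size threshold.

    `explain` is the output of a MongoDB explain() call; `doc_count` is
    the target collection's document count, which the guard supplies as
    the context the AI assistant lacks.
    """
    winning = explain["queryPlanner"]["winningPlan"]
    if "COLLSCAN" in plan_stages(winning) and doc_count >= LARGE_DOC_COUNT:
        return False
    return True
```

Because the check is a pure function over an explain document, the same logic can run in an IDE hook, a CI step on the PR, and a deploy-time gate without being reimplemented at each stage.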

Key Takeaways:
• The specific categories of AI-generated database operations that cause production incidents — and why LLMs are structurally unlikely to avoid them
• A developer workflow pattern that intercepts dangerous queries before they reach production, at multiple enforcement points
• The single most important constraint to put on any database connection you expose to an AI coding tool
• How to apply the same safety rule at the IDE, PR, and deploy stages without duplicating the logic
Target Audience: Backend engineers, platform engineers, and engineering leads who have adopted AI coding tools and want practical database safety guardrails without slowing teams down.
