Speaker

Viswanathan Ranganathan

Senior Engineer, Netflix

San Francisco, California, United States

Viswanathan Ranganathan is a Senior Engineer at Netflix, where he's part of the Delivery Engineering team that powers every service deployment across the platform. His current focus is on building deployment safety and confidence features for Netflix's infrastructure, including the Quiet Period system that protects production during high-stakes moments.
Previously, Viswanathan was a Senior Backend Engineer at Atlassian and a Technical Lead at Twilio, where he built scalable systems handling billions of messages per day. He specializes in distributed systems, deployment platforms, and honing the art of knowing when NOT to deploy.

Area of Expertise

  • Information & Communications Technology

Topics

  • distributed systems
  • Distributed Software Systems
  • Distributed Backend Applications and Services
  • Scalable Distributed Systems
  • Distributed Architecture
  • distributed computing
  • Software Engineering
  • Software Design
  • Backend Engineering
  • Java & JVM
  • JVM Languages
  • Scala Programming
  • Java and Server-side
  • Advanced Distributed Systems Architecture
  • Microservice Architecture
  • Cloud Native
  • Cloud Computing Architectures
  • AWS Cloud Computing
  • Apache Cassandra
  • Kafka Streams
  • Apache Kafka and Kafka Streams
  • Apache Kafka
  • Netflix Engineering
  • Cloud Technology

Versioned Datasets - Rethinking local in-memory caches

In distributed systems, multiple services often rely on the same dataset, served through APIs. For relatively small datasets (a few gigabytes), local in-memory caches offer microsecond latency without the complexity of a distributed cache, avoiding another integration whose operational cost outweighs the reward.
The JVM ecosystem offers excellent caching libraries, such as Caffeine, Guava Cache, and Ehcache, but traditional approaches force operational trade-offs. Cold starts delay deployments. TTL expirations trigger cache stampedes. Full reloads cause memory spikes and GC pauses that disrupt service. Incremental updates require complex change-tracking infrastructure. At the gigabyte scale, these challenges become critical—precisely where in-memory caching matters most.
This talk introduces an alternative paradigm: treating datasets as versioned snapshots with delta-based distribution—applying Git's model to in-memory data. We'll explore Hollow, Netflix's open-source library that implements this pattern, achieving zero-downtime updates and eliminating memory spikes and GC pressure entirely.
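The versioned-snapshot idea can be sketched in a few lines of Java. This is a conceptual illustration of delta-based updates, not Hollow's actual API; the `VersionedDataset` and `Delta` names are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Conceptual sketch: a dataset advances through numbered versions,
// and each update is a delta carrying only what changed.
public class VersionedDataset {
    private long version = 0;
    private final Map<String, String> data = new HashMap<>();

    // A delta transitions the dataset from one exact version to the next.
    public record Delta(long fromVersion, long toVersion,
                        Map<String, String> upserts,
                        Set<String> deletes) {}

    public long version() { return version; }
    public String get(String key) { return data.get(key); }

    // Apply a delta in place: no full reload, no memory spike.
    public void apply(Delta delta) {
        if (delta.fromVersion() != version) {
            throw new IllegalStateException("delta does not chain from current version");
        }
        data.putAll(delta.upserts());
        delta.deletes().forEach(data::remove);
        version = delta.toVersion();
    }
}
```

The version check is the Git-like part: deltas form a chain, so a consumer can never apply an out-of-order update and silently corrupt its local copy.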

From Gatekeepers to Groundwork — Rethinking Human Oversight in Deployment Pipelines

Every deployment pipeline has one. The button. The approval gate. The moment where an engineer pauses, skims a diff, and clicks approve — not because they've evaluated anything specific, but because that's how it's always been done. It's the monkey ladder of modern software delivery, and most teams don't even know they're climbing it.
This talk challenges the assumption that manual deployment gates are a form of caution. In most cases, they are a form of theater — a reflex dressed up as oversight. But the answer isn't to blindly automate your way to production either. The answer is to make human intervention something that has to be earned by the system, not assumed by default.
Drawing from real architectural patterns, this session introduces a deployment trust model built around three tiers — Manual, Assisted, and Autonomous — and the data-driven system that determines where any given service belongs at any given moment. You'll see how deployment health signals, test stability trends, pipeline frequency, and changeset risk annotations can be combined into a maturity score that tells your pipeline orchestrator whether to call a human or just ship.
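A minimal sketch of how such a maturity score might combine those signals. The weights, thresholds, and names below are illustrative assumptions, not the actual model the talk describes.

```java
// Hypothetical deployment trust score: weighted signals in [0, 1]
// mapped to one of three oversight tiers.
public class DeployTrust {
    public enum Tier { MANUAL, ASSISTED, AUTONOMOUS }

    // Each input is normalized to [0, 1]; higher means healthier.
    public static double maturityScore(double deployHealth,
                                       double testStability,
                                       double pipelineFrequency,
                                       double changesetSafety) {
        return 0.35 * deployHealth
             + 0.30 * testStability
             + 0.15 * pipelineFrequency
             + 0.20 * changesetSafety;
    }

    // Thresholds decide whether the orchestrator calls a human or just ships.
    public static Tier tierFor(double score) {
        if (score >= 0.85) return Tier.AUTONOMOUS;
        if (score >= 0.60) return Tier.ASSISTED;
        return Tier.MANUAL;
    }
}
```

The point of the sketch is the shape, not the numbers: the tier is computed from data per service per moment, so a service can earn autonomy and lose it again.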
The goal isn't to remove humans from the equation. It's to make their involvement exceptional — reserved for the moments when the data genuinely calls for it, rather than every time a merge lands on main.
If your team has reached continuous delivery but stalled on the last mile to continuous deployment, this talk is your on-ramp.

Cache me if you can: Decentralize your Distributed Caches with Hollow

Distributed caches are often used for scenarios that don't actually require them. For massive datasets (100s of GBs or more), distributed caches make sense: the data simply won't fit in a single node's memory. They tend to be overkill, however, for smaller datasets (100s of MBs to 10s of GBs) that do fit in memory. Meanwhile, traditional in-memory caching libraries bring their own operational challenges, such as cache stampedes on TTL expiration, memory spikes during reloads, and long cold starts that directly slow deployment velocity.

This talk proposes an unconventional alternative: What if we could decentralize our cache while centralizing its preparation? We'll discuss how dataset distribution using Hollow (an open-source project by Netflix) enables applications to serve data from local memory with microsecond access latency while staying synchronized via delta-based updates.

We'll cover:
- Design trade-offs that make this pattern ideal for GB-scale, read-heavy workloads.
- Delta-based updates that optimize cache reloads/refreshes.
- Zero-downtime updates applied in milliseconds without memory spikes.
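The "centralize preparation, decentralize serving" shape can be sketched as one producer announcing versioned deltas that every consumer applies to its own full local copy. Class and method names here are assumptions for illustration, not Hollow's API.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// Sketch: one producer publishes versioned deltas; each consumer keeps
// a complete in-memory copy current by applying only what it is missing.
public class DeltaFeed {
    // Deltas keyed by the version they produce, in publish order.
    private final TreeMap<Long, Map<String, String>> deltas = new TreeMap<>();
    private long announced = 0;

    public void publish(long toVersion, Map<String, String> upserts) {
        deltas.put(toVersion, upserts);
        announced = toVersion;
    }

    public long announcedVersion() { return announced; }

    // A consumer serves reads from local memory and catches up via deltas.
    public static class Consumer {
        private final Map<String, String> local = new HashMap<>();
        private long have = 0;

        public String get(String key) { return local.get(key); }
        public long version() { return have; }

        // Apply only the deltas published after our current version.
        public void refresh(DeltaFeed feed) {
            for (Map<String, String> d : feed.deltas.tailMap(have + 1).values()) {
                local.putAll(d);
            }
            have = feed.announcedVersion();
        }
    }
}
```

Because `refresh` touches only the changed entries, a consumer that is one version behind does a tiny amount of work, instead of the full reload that causes memory spikes and GC pauses.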

Netflix's Quiet Period: The Race Day Rule That Protects Holidays, Live Sports & Cloud Meltdowns

As the golden race day rule goes, do nothing new on race day. In other words, don't change what's already working. At Netflix, we follow this same principle through Quiet Period, our automated strategy for protecting production during the moments that matter most.

When live sports events capture enormous global audiences, when holiday viewership breaks all records, and when your cloud provider experiences a meltdown that sends half the internet into chaos—that is precisely when Netflix enacts its Quiet Period. This isn't merely a deployment freeze; it's a form of intelligent governance that poses one essential question: Is this truly the right moment to push something new into production?

In this session, we aim to take you inside Netflix's battle-tested playbook for high-stakes moments. You'll discover how we built systems that automatically protect production across multiple scenarios—from planned holiday peaks to emergency cloud outages. We'll explore the architecture that governs deployments across Streaming, Ads, and Gaming, and how we removed the "trust me, this is critical" problem that plagues every engineering organization under pressure.

You'll learn about our evolution from a manual policy to an intelligent adaptive system that knows when you're racing and refuses to let you experiment.
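At its simplest, a quiet-period check is a question the pipeline asks before every ship. The sketch below is a hypothetical minimal version, covering planned windows plus an emergency freeze; it is not Netflix's actual implementation.

```java
import java.time.LocalDate;
import java.util.Set;

// Hypothetical quiet-period gate; the windows and rules are illustrative.
public class QuietPeriodGate {
    // Planned windows (e.g. holiday peaks, live events) plus an emergency
    // flag that operators can flip during a cloud incident.
    private final Set<LocalDate> plannedWindows;
    private boolean emergencyFreeze = false;

    public QuietPeriodGate(Set<LocalDate> plannedWindows) {
        this.plannedWindows = plannedWindows;
    }

    public void setEmergencyFreeze(boolean on) { emergencyFreeze = on; }

    // A deployment is allowed only outside quiet periods.
    public boolean mayDeploy(LocalDate today) {
        return !emergencyFreeze && !plannedWindows.contains(today);
    }
}
```

The real system described in the talk is adaptive rather than a static calendar, but even this toy version shows the governance shift: the pipeline, not the engineer under pressure, answers "is this the right moment?"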

Every organization has race days. The question is: do you know when yours are, and are you disciplined enough to follow the rule?

DeveloperWeek 2026 Sessionize Event

February 2026 San Jose, California, United States

ACM Fremont Chapter

An engaging session hosted by the ACM Chapter, where industry experts and researchers explore the latest breakthroughs shaping the future of technology. This event will cover cutting-edge advancements in AI, quantum computing, blockchain, cloud-native systems, and next-generation computing architectures. Participants will gain insights into real-world applications, career opportunities, and the impact of emerging technologies on businesses and society.

November 2025 Fremont, California, United States
