Cache me if you can: Decentralize your Distributed Caches with Hollow

Distributed caches are often used for scenarios that don't actually require them. For massive datasets (hundreds of GBs or more), distributed caches make sense: the data simply won't fit in a single node's memory. For smaller datasets (hundreds of MBs to tens of GBs) that do fit in memory, however, they tend to be overkill. Traditional in-memory caching libraries, meanwhile, bring operational challenges of their own: cache stampedes on TTL expiration, memory spikes during reloads, and long cold starts that directly slow deployment velocity.

This talk proposes an unconventional alternative: what if we could decentralize our cache while centralizing its preparation? We'll discuss how dataset distribution with Hollow (an open-source project from Netflix) lets applications serve data from local memory with microsecond access latency while staying in sync via delta-based updates.

We'll cover:
- Design trade-offs that make this pattern ideal for GB-scale, read-heavy workloads.
- Delta-based updates that optimize cache reloads/refreshes.
- Zero-downtime updates applied in milliseconds without memory spikes.
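The delta-based update idea above can be sketched, very loosely, as applying small incremental changes to a full local copy of the dataset rather than re-fetching everything. This is a hypothetical simplification for illustration only: Hollow's actual implementation is a Java library that distributes compact binary snapshot and delta blobs, and all names below are invented.

```python
# Illustrative sketch of the delta-update pattern (not Hollow's real API).

def apply_delta(dataset: dict, delta: dict) -> dict:
    """Apply a delta in place: upserted records plus explicit removals."""
    for key, value in delta.get("upserts", {}).items():
        dataset[key] = value
    for key in delta.get("removals", []):
        dataset.pop(key, None)
    return dataset

# Each consumer holds the full dataset in local memory...
local_cache = {"m1": {"title": "Stranger Things"}, "m2": {"title": "Dark"}}

# ...and stays in sync by applying small deltas, so a refresh touches only
# the changed records instead of reloading (and doubling) the whole dataset.
delta = {"upserts": {"m3": {"title": "Ozark"}}, "removals": ["m2"]}
apply_delta(local_cache, delta)
```

Because reads go against local memory and updates arrive as small deltas, this pattern avoids both the network hop of a remote cache and the memory spike of a full reload.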

Viswanathan Ranganathan

Senior Engineer, Netflix

San Francisco, California, United States
