Moving timeseries data from a Postgres table to AWS DynamoDB for efficiency and profit
Imagine saving performance timeseries data in a Postgres table for thousands of stock portfolios. We quickly ran into performance, efficiency, and cost issues as the number of records grew. We made multiple tweaks to optimise and speed up the process, but they did not yield a significant improvement. Rather than continuing with minor modifications, it was time for a step change.
In this practical talk, you will learn how we moved stock portfolio performance data from a couple of Postgres tables to AWS DynamoDB, making the process more efficient and saving costs along the way. Each portfolio needs a year's worth of data to render its timeseries correctly. Before the move, the database was consistently hitting 100% CPU, with thousands of messages sitting in a queue waiting to be processed. After the move, database resource consumption dropped significantly, and messages were processed much faster.
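The talk does not spell out the DynamoDB table design, but a common approach for per-portfolio timeseries is a partition key per portfolio and an ISO-date sort key, so one Query call fetches a year's worth of points. Below is a minimal sketch of that key design; the attribute names, key prefixes, and the portfolio id are all hypothetical.

```python
from datetime import date, timedelta

def make_item(portfolio_id: str, day: date, value: float) -> dict:
    # Hypothetical key design: partition key groups all points for one
    # portfolio; ISO-formatted sort key keeps points in date order, so a
    # sort-key range condition can select any window of the timeseries.
    return {
        "pk": f"PORTFOLIO#{portfolio_id}",
        "sk": day.isoformat(),
        "value": value,
    }

# A year's worth of daily items for one (hypothetical) portfolio.
start = date(2023, 1, 1)
items = [make_item("abc123", start + timedelta(days=i), 100.0 + i)
         for i in range(365)]

print(len(items))                       # 365
print(items[0]["pk"], items[0]["sk"])   # PORTFOLIO#abc123 2023-01-01
```

With this layout, rendering a portfolio's chart is a single Query on one partition rather than a scan or an indexed range query over a large shared Postgres table, which is one plausible source of the efficiency gain described above.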
You will learn practical, real-world tips on choosing the optimal database for your use case to improve efficiency and profitability.
Geshan Manandhar
Senior Software Engineer, Simply Wall St.
Sydney, Australia