Why Migrate Large-Scale Quantitative Backtesting from Slurm to Argo Workflows?

Slurm and Argo Workflows are among the most popular tools for scheduling and orchestrating parallel tasks in HPC and cloud-native environments, respectively. However, Slurm has limitations in scenarios that demand high concurrency and elastic scalability.
In this talk, I will share the challenges we encountered when running large-scale quantitative backtesting workloads (processing more than 4,000 stocks in parallel) on Slurm, including performance, cost, and flexibility. I will explain why we chose Argo Workflows as an alternative and walk through the migration, showing how it resolved these challenges while preserving performance and improving the system's flexibility and scalability. I will also share best practices for running tens of thousands of pods in parallel on Argo Workflows, such as tuning the workflow controller and the Kubernetes control plane.
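To make the controller-tuning point concrete, here is a minimal sketch of the kind of settings involved. The values are illustrative assumptions, not the speaker's recommendations; the `parallelism` and `namespaceParallelism` keys live in the standard `workflow-controller-configmap`, and exact key names and defaults should be verified against the Argo Workflows documentation for your version.

```yaml
# workflow-controller-configmap sketch (illustrative values, not from the talk)
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
  namespace: argo
data:
  # Cap how many workflows the controller reconciles concurrently,
  # so a burst of backtests cannot overwhelm the API server.
  parallelism: "500"
  # Per-namespace cap, useful on clusters shared by multiple teams.
  namespaceParallelism: "200"
```

At high pod counts, the workflow-controller's command-line flags (for example `--workflow-workers` and the Kubernetes client `--qps`/`--burst` settings) and the API server's request-inflight limits typically need raising in tandem; the right numbers depend on cluster size and etcd capacity.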

Shuangkun Tian

Alibaba Cloud Technical Expert

