Session

Level Up your Kubernetes Scaling with KEDA

Whether it is a normal workday or Black Friday, service-oriented applications must be able to handle varying loads. Only then can they provide users with a good experience while keeping costs to a minimum.

With the Horizontal Pod Autoscaler, Kubernetes offers a way to vary the number of running application instances based on CPU or memory utilization. However, modern applications often depend on a variety of components and need to respond to external events, such as new messages in a queue or metrics in Azure Monitor.
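
For comparison, a minimal sketch of a Horizontal Pod Autoscaler that scales a hypothetical web-frontend Deployment on CPU utilization (all names and thresholds are illustrative assumptions):

# Sketch only: scales the (hypothetical) web-frontend Deployment on average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-frontend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-frontend
  minReplicas: 2          # the standard HPA does not scale to zero
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # target 70% average CPU across pods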

As an application developer or operations manager, what do I need to consider to ensure that my application can respond to these events? How can I configure Kubernetes for “scale to 0” so that my application runs only when it is needed?

Using Azure Kubernetes Service and KEDA (Kubernetes Event-driven Autoscaling), this session demonstrates with practical examples how to create and configure autoscalers that respond to external events and scale applications in Kubernetes accordingly.
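
As a taste of such a configuration, here is a minimal sketch of a KEDA ScaledObject that scales a hypothetical order-processor Deployment based on the length of an Azure Service Bus queue and scales it to zero when the queue is empty (the Deployment name, queue name, and connection environment variable are assumptions for illustration):

# Sketch only: KEDA ScaledObject for a hypothetical order-processor Deployment.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: order-processor-scaler
spec:
  scaleTargetRef:
    name: order-processor        # Deployment to scale (assumed name)
  minReplicaCount: 0             # scale to zero when there is no work
  maxReplicaCount: 10
  triggers:
    - type: azure-servicebus
      metadata:
        queueName: orders                          # assumed queue name
        messageCount: "5"                          # target messages per replica
        connectionFromEnv: SERVICEBUS_CONNECTION   # env var holding the connection string (assumed)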

Wolfgang Ofner

Freelance Cloud and Software Architect, Toronto, Canada
