Bridging the Gap: Automating Legacy VM Monitoring with Kubernetes, ArgoCD and Prometheus
In today's cloud-native world, Kubernetes has emerged as the de facto standard for deploying, managing, and scaling applications. However, the journey to Kubernetes is often fraught with challenges, particularly when it comes to integrating legacy systems that are not containerized and often reside on virtual machines (VMs). These legacy systems, which may include databases or other critical infrastructure, present unique monitoring challenges. They cannot be easily migrated to Kubernetes and often exist in an operational limbo, with uncertain decommission timelines. This situation poses a significant hurdle for teams that have already embraced Kubernetes and wish to centralize their monitoring infrastructure using Prometheus.
In this session, we will explore how we successfully revamped the monitoring architecture for legacy VMs using a Kubernetes-based Prometheus stack. Specifically, we'll delve into our innovative approach to automating the monitoring of these legacy systems using Prometheus, Prometheus Operator, and Blackbox exporter, all orchestrated through Ansible and ArgoCD.
We began our journey by recognizing the limitations of our previous monitoring solution, Icinga. While Icinga provided basic monitoring capabilities similar to Nagios, it fell short in terms of scalability, ease of configuration, and integration with modern cloud-native tools. One of the most glaring issues was the lack of autodiscovery, which forced us to manage configurations manually through a Git repository. This manual process was error-prone, slow, and lacked the flexibility needed for dynamic environments where VMs are spun up and down frequently.
Our goal was clear: we wanted a single source of truth for monitoring across both Kubernetes and our legacy VMs. Prometheus, with its powerful query language, flexible data model, and strong community support, was the natural choice. However, integrating legacy VMs into this new Prometheus-based ecosystem required a novel approach.
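To give a flavor of what that single source of truth buys us, a standard Prometheus alerting rule like the following fires whenever any scraped target, pod or legacy VM alike, stops responding. This is a minimal sketch; the `legacy-vms` job name and the thresholds are illustrative, not part of our actual configuration:

```yaml
groups:
  - name: legacy-vm-availability
    rules:
      - alert: TargetDown
        # "up" is the synthetic metric Prometheus records per scrape target;
        # the job label "legacy-vms" is a hypothetical example.
        expr: up{job="legacy-vms"} == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "{{ $labels.instance }} has been unreachable for 5 minutes"
```

The same rule syntax covers Kubernetes workloads and VM exporters alike, which is precisely what makes a unified Prometheus stack attractive.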
We leveraged Ansible to automate the deployment of Prometheus exporters on our VMs. These exporters are lightweight processes that collect and expose metrics from the VMs for Prometheus to scrape. We also used Prometheus's built-in OpenStack service discovery, which lets Prometheus dynamically discover VMs based on instance metadata tags. By defining a standardized tagging convention for our VMs, we could easily identify the technologies running on each machine, whether a database, web server, or other legacy application.
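The tag-driven discovery described above can be sketched as a Prometheus scrape job using `openstack_sd_configs` with relabeling on the discovered metadata. All concrete values here (the endpoint, region, tag name `monitoring`, and exporter port 9100) are hypothetical placeholders for our actual convention:

```yaml
scrape_configs:
  - job_name: legacy-vms
    openstack_sd_configs:
      - role: instance
        region: RegionOne                                      # illustrative
        identity_endpoint: https://keystone.example.internal:5000/v3
    relabel_configs:
      # Keep only instances carrying our monitoring tag
      # (discovered tags surface as __meta_openstack_tag_<key> labels)
      - source_labels: [__meta_openstack_tag_monitoring]
        regex: enabled
        action: keep
      # Point the scrape at the node_exporter port on the instance's private IP
      - source_labels: [__meta_openstack_private_ip]
        regex: (.+)
        replacement: "${1}:9100"
        target_label: __address__
```

New VMs tagged according to the convention are picked up on the next discovery refresh, with no manual configuration change.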
To further streamline the monitoring setup, we employed ArgoCD, a GitOps continuous delivery tool, to manage the deployment of Helm charts containing the necessary ScrapeConfig resources. These resources instruct the Prometheus Operator on how to discover and scrape metrics from our target systems. We also used the Blackbox exporter to perform TCP and ICMP checks, ensuring that even the most basic network connectivity issues were detected and alerted upon.
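As an example of the kind of resource those Helm charts carry, a ScrapeConfig (a Prometheus Operator CRD) can route TCP checks through the Blackbox exporter. This is a hedged sketch rather than our exact manifest: the hostnames, namespace, the `tcp_connect` module, and the exporter service address are illustrative, and exact field support varies with the Prometheus Operator version:

```yaml
apiVersion: monitoring.coreos.com/v1alpha1
kind: ScrapeConfig
metadata:
  name: legacy-vm-tcp-checks
  namespace: monitoring
spec:
  metricsPath: /probe
  params:
    module: [tcp_connect]          # a Blackbox exporter TCP module (illustrative)
  staticConfigs:
    - targets:
        - legacy-db-01.example.internal:5432
        - legacy-web-01.example.internal:443
  relabelings:
    # Hand the real target to the Blackbox exporter as ?target=...
    - sourceLabels: [__address__]
      targetLabel: __param_target
    # Keep the original target visible as the "instance" label
    - sourceLabels: [__param_target]
      targetLabel: instance
    # Scrape the Blackbox exporter itself, not the VM
    - targetLabel: __address__
      replacement: blackbox-exporter.monitoring.svc:9115
```

Because the manifest lives in a Helm chart tracked by ArgoCD, adding or changing a check is a Git commit, and the Operator reconciles the Prometheus configuration automatically.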
The result of this architecture is a fully automated, end-to-end monitoring solution for legacy VMs that integrates seamlessly with our Kubernetes-based monitoring infrastructure. This approach not only reduces the operational burden of managing separate monitoring systems but also ensures that no VM is left unmonitored, even as the infrastructure evolves over time.
In this talk, we will provide a detailed walkthrough of our implementation, share lessons learned, and discuss the challenges we faced along the way. Attendees will leave with a comprehensive understanding of how to leverage Kubernetes and Prometheus to monitor legacy systems effectively, as well as practical insights into automating this process using Ansible and ArgoCD.
Augusto Soubielle
Head of Engineering at The Workshop
Madrid, Spain