Containers, Kubernetes, Docker, Continuous Integration, Continuous Deployment, DevOps & Automation, Cloud Native, Cloud & Infrastructure, Cloud Computing, Google Cloud, Amazon Web Services, Microservices
Glasgow, Scotland, United Kingdom
David is a Senior Developer Advocate at Equinix Metal and a member of the Kubernetes org and release team.
As a professional technology magpie, David was an early adopter of cloud, container, and cloud-native technologies; crossing the murky waters of AWS in 2008, Docker in 2014, and Kubernetes in 2015.
With an insatiable love for technology, David is always on the hunt to learn and share knowledge with others in fun and exciting ways.
Cluster API is a Kubernetes sub-project focused on providing declarative APIs and tooling to simplify provisioning, upgrading, and operating multiple Kubernetes clusters.
Cluster API provides clusterctl, which can be configured with environment variables and allows the generation of Kubernetes manifests that describe your workload clusters.
While this provides a great on-boarding experience, managing and wrangling more YAML isn't something we're all yearning to do.
Fortunately, there's a better way.
Introducing Cluster API bindings for TypeScript, Go, and Python.
In this talk, I'll introduce you to managing Cluster API through your favourite programming languages.
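As a taste of the idea, here is a minimal sketch of driving Cluster API from Python using the standard `kubernetes` client rather than raw YAML. The cluster name, namespace, and CIDR below are illustrative placeholders, not values from the talk:

```python
def cluster_manifest(name, pod_cidr="192.168.0.0/16"):
    """Build a Cluster API `Cluster` custom resource as a plain dict."""
    return {
        "apiVersion": "cluster.x-k8s.io/v1beta1",
        "kind": "Cluster",
        "metadata": {"name": name, "namespace": "default"},
        "spec": {
            "clusterNetwork": {"pods": {"cidrBlocks": [pod_cidr]}},
        },
    }

# Applying it with the Kubernetes Python client (requires cluster access):
# from kubernetes import client, config
# config.load_kube_config()
# client.CustomObjectsApi().create_namespaced_custom_object(
#     group="cluster.x-k8s.io", version="v1beta1",
#     namespace="default", plural="clusters",
#     body=cluster_manifest("workload-01"),
# )
```

The same dict-building approach works for the provider-specific infrastructure resources a workload cluster also needs.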
The Cloud Native Computing Foundation, founded by The Linux Foundation, currently oversees Kubernetes, Prometheus, OpenTracing, Fluentd, and more. The CNCF describes being "Cloud Native" as having three major components:
1. Containerized
Each part (applications, processes, etc.) is packaged in its own container. This facilitates reproducibility, transparency, and resource isolation.
2. Dynamically orchestrated
Containers are actively scheduled and managed to optimize resource utilization.
3. Microservices oriented
Applications are segmented into microservices. This significantly increases the overall agility and maintainability of applications.
In this talk, I will guide you through taking your application cloud native, utilising the software available to us today from the CNCF and others; covering containers, tracing, logging, and service discovery.
Being a developer, programmer, analyst, tester, designer, etc. is hard. We work in an industry that champions the 12+ hour work day, continued learning, and open source contributions, but not on the company's dollar. We're continually berated with the idea of the 10x developer, and so we must work harder, read more blogs, write more code, and buy more books ... but when will it ever end? Will technology ever stand still long enough to let us all catch up? No. You'll always be busy: busy playing catch-up in a race you didn't sign up for.

Fortunately, there's another way. Instead of being busy, I can help you be more productive. I will walk you through some of the tools and techniques that I use on a daily basis, not only to maintain and upgrade my skills in a world of ever-changing technology, but, more importantly, to protect my sanity, be more present, and remove stress and fear from my life.
Telegraf is an agent for collecting, processing, aggregating, and writing metrics.
I bet that's the first time you've heard it mentioned though, right?
Let's fix that.
With over 200 plugins, Telegraf can fetch metrics from a variety of sources, allowing you to build aggregations and write those metrics to InfluxDB, Prometheus, Kafka, and many more targets.
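A representative (illustrative, untested) Telegraf configuration shows that pipeline shape: an input receives metrics, an aggregator summarises them at the edge, and multiple outputs fan out. The bucket, organization, and addresses are placeholders:

```toml
[agent]
  interval = "10s"

# Receive application metrics over StatsD.
[[inputs.statsd]]
  protocol = "udp"
  service_address = ":8125"

# Aggregate at the edge before shipping.
[[aggregators.basicstats]]
  period = "30s"
  drop_original = false
  stats = ["mean", "min", "max"]

# Fan out to multiple targets.
[[outputs.influxdb_v2]]
  urls = ["http://localhost:8086"]
  bucket = "metrics"
  organization = "my-org"
  token = "$INFLUX_TOKEN"

[[outputs.prometheus_client]]
  listen = ":9273"
```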
In this talk, we will take a look at some of the lesser-known, but awesome, plugins that are often overlooked and that allow Telegraf to monitor your cloud native applications. Specifically, we'll cover building a lightweight edge processor and sidecar container to provide a fast, reliable, robust, performance-enhancing collection pipeline that never drops a metric; and, as a bonus, provides application-specific probes that understand your application like nothing else.
Let's dive in.
Over the last 10 years, software development has evolved at an extraordinary pace. As we dismantled our bare metal for virtual machines, broke apart monoliths for microservices, and set sail with containers and service meshes; we've had to lovingly embrace Infrastructure as Code (IaC) as a means to increase speed and efficiency, reduce risk, secure processes, and provide consistent and reproducible environments.
With InfluxDB being an integral part of your applications, services, infrastructure, and observability pipelines; it's only fair that we provide you with the same IaC primitives you're already accustomed to. Right?
In this session, we'll introduce you to pkger; InfluxDB's command line tool to apply declarative manifests to configure your InfluxDB instance.
From zero to awesome, in 88 seconds.
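As an illustrative sketch of the declarative style involved (the resource names and retention value here are placeholders, not an excerpt from the talk), an InfluxDB manifest for a bucket looks roughly like:

```yaml
apiVersion: influxdata.com/v2alpha1
kind: Bucket
metadata:
  name: app-metrics
spec:
  description: Application metrics
  retentionRules:
    - type: expire
      everySeconds: 604800   # keep one week of data
```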
Time Series is everywhere. It is all around us. Even now, in this very room. It's there when you look out your window, or when you turn on your television. It's there when you go to work, when you go to church, when you pay your taxes.
We often get so caught up in monitoring metrics from our EC2 instances, microservices, and service meshes; that we forget that there's plenty of time series before our code ever leaves our laptops: The Git repository.
In this session, we'll walk through extracting time series data from Git, storing it in InfluxCloud, and building an understanding of our codebase, team, and commit habits.
Let's Git Started
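That extraction step can be sketched in a few lines of Python: turn `git log` author timestamps into InfluxDB line protocol points. The measurement and field names are my own illustrative choices:

```python
import subprocess

def commits_to_line_protocol(log_output, measurement="commits"):
    """Convert `git log --pretty=format:%at,%an` output into line protocol."""
    points = []
    for line in log_output.splitlines():
        epoch, author = line.split(",", 1)
        # Line protocol requires spaces in tag values to be escaped.
        tag = author.strip().replace(" ", "\\ ")
        # InfluxDB timestamps are nanoseconds since the epoch.
        points.append(f"{measurement},author={tag} count=1i {int(epoch) * 10**9}")
    return points

def repo_commit_points(repo_path="."):
    """Run git against a local repository and convert its history."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--pretty=format:%at,%an"],
        capture_output=True, text=True, check=True,
    ).stdout
    return commits_to_line_protocol(log)
```

From there, the points can be written to any InfluxDB endpoint that accepts line protocol.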
_ A separate ticket is required to attend this full-day training taking place May 19: https://tek.phparch.com/register _
Kubernetes, the flagship project of the Cloud Native Computing Foundation, has become the de facto standard for running our container workloads.
Unfortunately, Kubernetes is a fast-moving, ever-evolving sea of complexity. From Pods to Deployments, ConfigMaps to Secrets, and PersistentVolumeClaims to StatefulSets; this training will get you on course.
In this full-day training class, David will walk you through a series of labs that will teach you everything you need to know to take your container-based application and deploy it as a self-healing, redundant, and resilient application on top of Kubernetes.
Let's set sail.
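As a minimal, illustrative example of the kind of manifest the labs build toward (image, names, and ports are placeholders): a Deployment whose replica count provides redundancy and whose liveness probe drives self-healing restarts.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                # redundancy: three identical Pods
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0.0   # placeholder image
          ports:
            - containerPort: 8080
          livenessProbe:     # self-healing: restart on failed checks
            httpGet:
              path: /healthz
              port: 8080
```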
Time series has been the fastest-growing database category, as rated by DB-Engines, for over 2 years; yet less than 15% of organisations store their time-series data in a time-series database. Do you?
One could, accurately, say that time-series data is as old as the universe; but it wasn't until the late 19th century that the first article was published on the concept: "A Comparison of the Fluctuations in the Price of Wheat and in the Cotton and Silk Imports into Great Britain" by J. H. Poynting (March 1884).
Time-series data is so natural and common that you actually consume, evaluate, and utilise it every day when you're:
- Paying for your morning coffee
- Sighing at the "Delayed" notice on your commute
- Ploughing through your email inbox
In this talk we will look at the different types of time-series data and how to use that to drive observations, understanding, and automation.
Most data is best understood in the dimension of time; let's see why.
The advent of Cloud Native architectures means our applications have become smaller and simpler, without any catches; right?
Migrating to microservices removes the complexity that comes with monolithic applications, allowing for faster builds, easier deployments, simpler domains, and more slices of pizza per team member; but where does all that complexity go? 🧐
Our networks and infrastructure must mature and adapt to support our new distributed architecture. As we race to adopt containers, schedulers, and service meshes; we must remember to measure, monitor, and alert on our new problem domain.
In this workshop we will look at the challenges of monitoring distributed systems and how we can leverage time series databases for logging, metrics, tracing, and alerting.
This workshop will use InfluxDB (my employer) as the monitoring platform.
Telegraf is an agent for collecting, processing, aggregating, and writing metrics.
With over 200 plugins, Telegraf can fetch metrics from a variety of sources, allowing you to build aggregations and write those metrics to InfluxDB, Prometheus, Kafka, and more.
In this talk, we will take a look at some of the lesser-known, but awesome, plugins that are often overlooked, as well as how to use Telegraf to monitor Cloud Native systems.
Not all developers like containers.
I know, I was shocked too. However, containers are essential to providing the dev/prod parity that allows us to minimise bugs in production, deploy a single artefact to thousands of machines, and eat all of the disk space on your laptop; so what do we do?
This talk introduces the DShell Pattern, a pattern I've been using for many years to provide simple, flexible, and documented tooling for projects in any language; while facilitating native and container native development workflows.
Let's DShell up.
For better or worse, the advent of container technologies has led us down the path to Cloud Native architectures and Kubernetes. Unfortunately, this means that our laptops no longer have the resources available to develop and test our systems as a whole.
Ever heard the phrase "What is old will be new again"?
Introducing ... The Shared Development Server 2020™️
In this session, I will introduce you to the different tools available for developing your microservices against the Kubernetes API. You will leave this session with an understanding of the tools and patterns that allow you to develop, test, and deploy your applications on your shiny new dev server.
Let's have some fun.
In 1989, the Law of Demeter was documented as a best practice for writing software. Not really a law, it is a design guideline that applies the principle of least knowledge to the software we write. To be more succinct, the law wants us to write loosely coupled classes, functions, modules, and services.
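To make the guideline concrete, here is a small illustrative sketch (the class names are mine, not from the talk). The caller only talks to its immediate collaborator, rather than reaching through it into its internals:

```python
class Wallet:
    def __init__(self, balance):
        self.balance = balance

class Customer:
    def __init__(self, balance):
        self.wallet = Wallet(balance)

    # Demeter-friendly: the customer exposes behaviour, not structure.
    def pay(self, amount):
        if self.wallet.balance < amount:
            raise ValueError("insufficient funds")
        self.wallet.balance -= amount
        return amount

def checkout(customer, amount):
    # Violation would be: customer.wallet.balance -= amount, which reaches
    # through the customer into its wallet ("talking to a stranger").
    # Least knowledge: ask the immediate collaborator to do the work.
    return customer.pay(amount)
```

The same shape applies one level up: services should call the interfaces of their neighbours, not poke at their neighbours' dependencies.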
A common challenge I hear when talking to people, teams, and organizations adopting GitOps is: environments. How do I deploy my workloads in configurations that scale appropriately for the constraints and requirements of the environment they're running within?
In this talk, I will take a look at how we structure and handle GitOps when deploying across multiple environments and see what we can learn from a law defined 30 years before GitOps was coined.