Speaker

Laura Santamaria

Lead Developer Advocate at Dell Technologies

Austin, Texas, United States

As a Lead Developer Advocate at Dell Technologies, Laura Santamaria loves to learn and explain how things work to bridge the gaps between engineering disciplines. She is a co-host of the Cloud Native Compass podcast and was the curator for A Minute on the Mic, a co-host for The Hallway Track podcast, and the host of Quick Bites of Cloud Engineering. In the past, she’s worked in many roles, including as a software developer, an ops specialist building and running platforms, a part-time CTO, a technical writer, an editor, a science educator, and a literacy and education researcher. In all of these roles, she spent time building, maintaining, and observing communities of practice. That experience with communities, coupled with both a love of science and data from her degree work in earth and atmospheric sciences and a love of education, led her to developer advocacy.

As a community member, she co-hosts multiple meetups in the Austin, Texas, area, including Cloud Austin. For many years, she taught Python for the Women Who Code Austin meetup and co-hosted Austin DevOps. She is an organizer for DevOpsDays Austin, DevOpsDays Texas, and PyTexas, all community-run conferences. For the past few years, she has been a returning program committee member for Open Source Summit’s Cloud Open track, which explores cloud infrastructure and cloud apps. Outside of tech, Laura runs (slowly), reads (a lot), takes pictures of cars, plays with her dogs, and watches clouds—the real kind.

Area of Expertise

  • Information & Communications Technology

Topics

  • DevOps
  • Infrastructure as Code
  • Infrastructure
  • Cloud Native Infrastructure
  • Infrastructure as code (IaC) security and policy-as-code
  • Cloud & Infrastructure
  • Communities of Practice
  • Tech Communities
  • Developer Communities
  • Empowering Communities
  • Building Communities
  • Open Source Communities
  • Communities
  • Python
  • Python 3
  • Developer Relations

Communication and Empathy across Remote and Distributed Teams

Over the past year and a half, the world went a bit sideways, and as a result, many companies went fully remote and are considering staying remote in some fashion. Some folks suddenly found themselves working remotely for the first time or working with teammates who had never worked remotely before, and that situation likely won’t change. However, working remotely means adopting different communication strategies and finding ways to empathize with coworkers without socializing or discussing things in person, yet many folks still attempt to engage the same way they did when they congregated in an office every day. The difficulties around communication and empathy on remote and distributed teams get even worse when you toss in high-emotion situations like incidents and on-call shifts. Let’s explore
* how communication and empathy on remote and distributed teams differ from those on in-person teams,
* why these differences arise, and
* what teams and individuals can do to improve communication and to create and maintain empathy on remote teams, from using communication frameworks to helping others learn how to empathize with you.

Mayday, Mayday: Starting a job with a production incident

Ever walk into a job and straight into a fire? I have! I contend that starting out fighting a fire on production with your new teammates is one of the best ways to learn about your new systems and settle into a team. There’s nothing like drinking from the firehose. Let’s talk about why I think that way, how to do it right, and how to get the most out of your first production incident on the job.

I Borked Prod: Initial troubleshooting of distributed systems in 5 minutes or less

Prod just fell over. You have 5 minutes before the screaming starts. What do you do? Let’s lay out the path to getting moving without panicking, hopefully before your boss appears at your shoulder.

If you haven’t ever been on call when production fell over, you eventually will be. One great way to avoid just sitting there in terror is to have a rough framework of what to do in your mind before you need it. This step-by-step troubleshooting process has worked for me countless times, and I’d like to share it with all of you.

Righting a Sinking Ship: Troubleshooting systems with available data

Ever been stuck with a system that just can’t heal? A system that falls over? Working with modern systems, especially containerized systems distributed across many clouds, can be difficult and frustrating for anyone on call when something goes wrong. I’ve certainly been there. Let’s dig into where you can gather data from a broken system, how to get data if you’re not lucky enough to have logs, how you can figure out what’s happening using that data, and how best to act on that data. We’ll also explore common trouble spots that might be hidden in that data for you to find. Finally, we’ll take a look specifically at common issues with containers and when they’ll appear so they’re easier to spot.
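
To give a taste of the kind of data you can still pull when application logs are missing, here is a minimal Python sketch using the Docker SDK; the container name and the specific fields inspected are hypothetical examples, not the full process from the talk.

    import docker  # Docker SDK for Python (pip install docker)

    client = docker.from_env()
    container = client.containers.get("payments-api")  # hypothetical container name

    # Even a crashed container carries state you can read without application logs.
    state = container.attrs["State"]
    print("status:", container.status)
    print("exit code:", state["ExitCode"])
    print("oom killed:", state["OOMKilled"])
    print("restarts:", container.attrs.get("RestartCount", 0))

    # Whatever stdout/stderr the runtime captured is still there, too.
    print(container.logs(tail=50).decode(errors="replace"))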

What Cloud Engineers Could Learn from Clouds

Let’s talk about clouds. No, not that kind of cloud with other people’s computers. Let’s talk about the kind of clouds that kids draw as fluffy puffballs. Did you know that clouds are complex, chaotic systems that have a lot in common with our tech clouds of today? Weather systems and planetary atmospheres are fascinating explorations in understanding, modeling, and (attempting to) predict complex chaotic systems in an applied setting, and humans have been watching and studying clouds for millennia. We’ve only started studying them from the perspective of mathematics and physics in recent times, a timeframe measured in hundreds of years. In comparison, the modern cloud-based computing world has only been around for a “short” time. There’s a lot we can learn from the world of meteorology and atmospheric science, so let’s go on a tour through our atmosphere and beyond, and see what we can learn about understanding, modeling, and predicting complex systems from these ephemeral masses of vapor.

More Difficult than Rocket Science: Chaos and Distributed Systems

Did you know that meteorology is more difficult than rocket science? It’s true! There’s more chaos in the systems in meteorology than you find in the vacuum of space*. Similarly, DevOps is harder than rocket science because of the chaos of human interactions colliding with the everyday chaotic flow of data across systems. When we add in the complexity of distributed concurrent systems, we add a whole new level of chaos to the systems we interact with. How can we understand these systems from a holistic standpoint? How can we troubleshoot them better when an incident is occurring?

Let’s talk about the chaos inherent in DevOps, and maybe we can learn some strategies for handling that chaos from the fields of meteorology and earth sciences.

*Take this meteorology joke with a grain of salt; we can debate how much chaos there is in the vacuum of space some other time.

Tanukis with Hammers: The dangers of third-party tooling

It’s lovely to spend time decorating your house and grounds while tanukis come in mysteriously and upgrade your house when you’re not watching. It’s also lovely to build on top of other tooling so you don’t have to reinvent the wheel. However, what would happen if Timmy and Tommy decided to tell Mr. Nook that they’d rather watch for shooting stars one night? Do you understand how your third-party tooling works, or, more importantly, why it was built in the first place? Many platforms have been built on third-party tooling without much thought about what happens when the systems underneath go down. Those platforms fall apart when there’s an issue because the folks who built them don’t understand how the third-party tooling they relied on works in the first place. We want to avoid that reckoning, but how? How can we be sure our houses will get upgraded if Timmy and Tommy were to go on strike? Let’s talk about it.

Cultural Confusion: Bridging the gap between initiative and implementation

When talking about a “DevOps transformation,” we talk a lot about changing the company culture. However, what do we mean when we say “culture”? The term itself may seem clear from a high level, but it gets fuzzier and fuzzier the more we attempt to drill down and focus. What might be clear to someone in a leadership position when they talk about changing company culture gets unfocused and confusing as implementation begins further down the chain. As a result of this fuzziness, these transformations can fail to materialize or fail to persist after the initial focus by leadership. Handling that fuzziness throughout the organization is crucial to building a DevOps mindset that sticks. Let’s explore how to find a path forward together.

Leaving the Nest: Guidelines, guardrails, and human error

When we talk about reliable systems, we talk a lot about human error. Human error in an incident or a bug report is often treated with a bit of a facepalm reaction. The term masks a lot of scenarios, from accidents to exhaustion to everything in between. However, examining human error helps us understand where our processes failed and how we can prevent the same error from happening again. In short, we need to think in terms of a framework of guidelines and guardrails. In this talk, let’s discuss how guidelines like runbooks and guardrails like automation can help us address the fact that everyone will, at some point, make mistakes.
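
As one small illustration of a guardrail of the kind described above, an automated pre-deploy check can catch a tired human’s mistake before it ships; the rules below are invented for the example and would differ for every team.

    # A tiny, hypothetical guardrail: validate a deployment config before it ships.
    # The specific rules here are invented for illustration only.
    def check_guardrails(config: dict) -> list[str]:
        problems = []
        if not config.get("tags", {}).get("owner"):
            problems.append("missing 'owner' tag: no one to page when it breaks")
        if 22 in config.get("open_ports", []):
            problems.append("port 22 exposed: use the bastion host instead")
        if config.get("replicas", 1) < 2:
            problems.append("fewer than 2 replicas: one bad node takes the service down")
        return problems

    if __name__ == "__main__":
        deploy = {"tags": {"team": "web"}, "open_ports": [80, 22], "replicas": 1}
        for problem in check_guardrails(deploy):
            print("BLOCKED:", problem)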

Getting to DevOps: Musings of a DevRel on communities of practice

When thinking about a DevOps culture, we all come to the concept with different ideas, experiences, and knowledge, and we are all part of different communities of practice in both our workplaces and our general communities. Similarly, our organizations contain multiple points of view and multiple communities of practice. However, we’re all dealing with complex systems for which a change in perspective is needed to build something bigger than ourselves or our single point of view. To get to a true “DevOps transformation” in any organization, we have to make a fundamental shift in the ways our internal communities of practice think and bring together groups of people to drive new ideas. Come chat with Laura as she shares her experiences building communities and studying systems to explore how you can influence your organization’s shift to DevOps.
