Manuel de la Peña
Docker, Staff Software Engineer
Toledo, Spain
Manuel is an OSS Software Engineer at Docker, where he maintains "Testcontainers for Go". Since 2003, he has held various roles across the development process: he started at the regional public administration of Castilla-La Mancha (Spain), where he worked until 2007, and then moved to more traditional consulting firms. In 2011, he transitioned to product-oriented and Open Source companies, serving as a support engineer, trainer, and Core Engineer at Liferay, and as QA Tech Lead at Liferay Cloud. From 2019 to 2022 he worked on Engineering Productivity at Elastic as part of the Observability product, and since 2022 he has done pure OSS at AtomicJar, which was acquired by Docker in December 2023. In every job he tries to improve the quality of software products and processes from the automation and testing point of view.
He has also founded and managed a couple of small web development and systems consulting companies. Additionally, he organises the Google Developers Group in Toledo, Spain (GDG Toledo), which runs monthly discussions about software in its various aspects, serving as a small community outside bustling Madrid. Manuel has also delivered talks at national and international events.
Manuel holds a BS in Computer Science (UNED, Spain) and a master's degree in Research in Software Engineering and Information Systems (UNED, Spain). You can find him on the Internet as "mdelapenya" everywhere.
Choosing the Smallest LLM That Won’t Completely Fail You
Bigger isn’t always better, especially when it comes to running language models locally. In this session, we’ll explore how to evaluate and benchmark Small Language Models (SLMs) using Go, Docker, and Testcontainers.
You’ll learn how to build a framework in Go that leverages Docker's Model Runner as an inference engine to automatically spin up SLMs, run controlled evaluation scenarios, and collect observability metrics. We’ll define an Evaluator Agent that executes a battery of standard prompts across multiple models, an approach that helps you understand performance, accuracy, and resource trade-offs in practical developer setups.
We’ll move from building a reusable evaluation harness to defining and orchestrating prompts as tests for different models. You’ll see how to instrument Go benchmarks with metrics and traces to visualize behavior instantly and make informed decisions. And of course, you’ll walk away with practical insights on selecting the smallest model that won’t fail you.
By the end, you’ll have a repeatable approach for testing and comparing language models.
Warm up your GPUs, but less than you think.
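
As an illustration of the kind of harness the session builds (a minimal sketch, not the session's actual code), the Go benchmark below sends the same prompt to several models through an OpenAI-compatible chat completions endpoint. The base URL and the model tags are assumptions about a local Docker Model Runner setup; adjust them to your environment.

package evals

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"testing"
)

// Assumed local endpoint: Docker Model Runner exposes an
// OpenAI-compatible API, but the host and port depend on your setup.
const baseURL = "http://localhost:12434/engines/v1/chat/completions"

// ask sends a single prompt to the given model and returns the
// first completion's content.
func ask(model, prompt string) (string, error) {
	payload, err := json.Marshal(map[string]any{
		"model":    model,
		"messages": []map[string]string{{"role": "user", "content": prompt}},
	})
	if err != nil {
		return "", err
	}
	resp, err := http.Post(baseURL, "application/json", bytes.NewReader(payload))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	var out struct {
		Choices []struct {
			Message struct {
				Content string `json:"content"`
			} `json:"message"`
		} `json:"choices"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return "", err
	}
	if len(out.Choices) == 0 {
		return "", fmt.Errorf("model %s returned no choices", model)
	}
	return out.Choices[0].Message.Content, nil
}

// BenchmarkModels runs the same prompt against each model so that
// `go test -bench=.` reports per-model timings side by side.
func BenchmarkModels(b *testing.B) {
	models := []string{"ai/smollm2", "ai/llama3.2"} // hypothetical model tags
	for _, m := range models {
		b.Run(m, func(b *testing.B) {
			for i := 0; i < b.N; i++ {
				prompt := "Summarize in one sentence: Go is a compiled, statically typed language."
				if _, err := ask(m, prompt); err != nil {
					b.Fatal(err)
				}
			}
		})
	}
}

From there, a real harness would layer accuracy checks on the returned content and export metrics and traces, which is where the performance/accuracy/resource trade-off analysis happens.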
Testing GenAI applications in Go
The evolution of GenAI applications brings with it the challenge of developing testing methods that can effectively evaluate the complexity and subtlety of responses generated by advanced artificial intelligences.
The proposal to use an LLM as a Validator Agent represents a promising approach, paving the way towards a new era of software development and evaluation in the field of artificial intelligence. Over time, we hope to see more innovations that allow us to overcome the current challenges and maximize the potential of these transformative technologies.
This proposal involves defining detailed validation criteria and using an LLM as an “Evaluator” to determine if the responses meet the specified requirements. This approach can be applied to validate answers to specific questions, drawing on both general knowledge and specialised information. By incorporating detailed instructions and examples, an Evaluator can provide accurate and justified evaluations, offering clarity on why a response is considered correct or incorrect.
In this session we’ll leverage langchaingo to interact with LLMs, and Testcontainers Go to provision the runtime dependencies to use RAG.
Accepted at GopherCon SG 2025
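
As a rough sketch of the Validator Agent idea described above (my illustration, not the session's code), the snippet below uses langchaingo to ask an LLM to judge a candidate answer against explicit criteria. The prompt wording and the VALID/INVALID reply protocol are assumptions made for this example.

package validator

import (
	"context"
	"fmt"
	"strings"

	"github.com/tmc/langchaingo/llms"
	"github.com/tmc/langchaingo/llms/openai"
)

// Validate asks an LLM, acting as an Evaluator, whether an answer
// satisfies the given validation criteria, returning the verdict
// together with the model's justification.
func Validate(ctx context.Context, criteria, question, answer string) (bool, string, error) {
	llm, err := openai.New() // reads OPENAI_API_KEY from the environment
	if err != nil {
		return false, "", err
	}
	// Assumed protocol: the model answers VALID or INVALID first,
	// then explains its verdict.
	prompt := fmt.Sprintf(`You are a strict evaluator.
Criteria: %s
Question: %s
Candidate answer: %s
Reply with VALID or INVALID on the first line, followed by a one-sentence justification.`,
		criteria, question, answer)

	resp, err := llms.GenerateFromSinglePrompt(ctx, llm, prompt)
	if err != nil {
		return false, "", err
	}
	ok := strings.HasPrefix(strings.ToUpper(strings.TrimSpace(resp)), "VALID")
	return ok, resp, nil
}

Returning the justification alongside the boolean verdict is the point of the approach: the Evaluator explains why an answer passes or fails, not just whether it does.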
Delightful integration tests in Go applications
Dockerized services are an excellent tool for creating repeatable, isolated environments ideal for integration tests. In this session, we’ll look at the Testcontainers libraries, which provide a flexible and intuitive API for programmatically controlling the lifecycle of your service dependencies in Docker containers.
Running databases, Kafka, Elasticsearch, and even cloud technologies, straight from your test code ensures environment config is always up-to-date and consistent during local development and in CI pipelines.
You’ll learn everything necessary to start adding powerful integration tests to your codebase without the headache of managing external service dependencies manually!
Preferred session duration: 30-45 mins
Target audience: Introductory and Overview
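
To give a flavour of the API the session covers, here is a small, self-contained test using Testcontainers for Go to start a throwaway Redis container; the image tag and wait condition are illustrative choices, not taken from the talk.

package app_test

import (
	"context"
	"testing"

	"github.com/testcontainers/testcontainers-go"
	"github.com/testcontainers/testcontainers-go/wait"
)

func TestWithRedis(t *testing.T) {
	ctx := context.Background()

	// Start a disposable Redis container for this test.
	redisC, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
		ContainerRequest: testcontainers.ContainerRequest{
			Image:        "redis:7-alpine",
			ExposedPorts: []string{"6379/tcp"},
			WaitingFor:   wait.ForLog("Ready to accept connections"),
		},
		Started: true,
	})
	if err != nil {
		t.Fatal(err)
	}
	// Terminate the container when the test finishes.
	t.Cleanup(func() {
		if err := redisC.Terminate(ctx); err != nil {
			t.Fatalf("failed to terminate container: %s", err)
		}
	})

	// Resolve the host and mapped port to connect a client to.
	endpoint, err := redisC.Endpoint(ctx, "")
	if err != nil {
		t.Fatal(err)
	}
	t.Logf("redis is available at %s", endpoint)
}

Because the container's lifecycle is owned by the test itself, the same code works unchanged on a laptop and in CI, with no pre-provisioned services.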
Container Days London (upcoming)
ContainerDays Conference 2025
DevFest Berlin