Manuel Alejandro Ledezma Falcon
Sr. QA Automation Lead & Content Creator
Barcelona, Spain
Manuel Ledezma, known in the tech community as Tester Testarudo, is a software testing and automation specialist committed to delivering high-quality, reliable software. In recent years, he has focused on mastering QA practices and automation strategies while working in agile, fast-paced environments.
He has contributed to leading companies such as Mediktor, AXA, Telecom Argentina, Newfold, and Mojo Marketplace, where he implemented scalable testing solutions that improved product stability and user experience.
Manuel currently serves as the QA Automation Lead at Mediktor in Barcelona, Spain, where he leads automation initiatives to ensure robust and impactful digital products.
Beyond his professional work, Manuel empowers the QA community through Tester Testarudo, his educational project dedicated to helping newcomers learn testing in a clear, practical, and accessible way.
Area of Expertise
Topics
Strategies for Transforming Quality in Times of Crisis
Software testing is currently facing a major crisis, especially in environments where a quality culture is not established. We will address how to identify and manage challenges when testing lacks a solid quality foundation. Additionally, we’ll explore how to select testing and automation tools based on available human and financial resources to maximize impact and efficiency.
Fair AI from QA: How Testers Can Prevent Algorithmic Bias
In this interactive talk, we will explore how artificial intelligence systems can be affected by biases hidden in the data or in the models themselves, influencing critical decisions such as hiring, financial approvals, or medical diagnoses. You will learn what algorithmic bias is, how to detect it from the QA role, and which strategies you can apply to mitigate it.
The session includes a hands-on activity analyzing real datasets to identify discriminatory patterns and to reflect on the tester's ethical role in the age of AI. It is an essential session for professionals who want to ensure that technology is fair, safe, and transparent.
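To give a flavor of that kind of dataset analysis, here is a minimal sketch (not material from the talk itself) of one check a tester might run: comparing positive-outcome rates across a protected attribute with pandas. The column names, the toy loan-approval data, and the 0.8 threshold (the common four-fifths rule of thumb) are illustrative assumptions.

```python
# Minimal sketch: probe a dataset for disparate outcomes across a protected attribute.
# Column names ("gender", "approved") and the 0.8 threshold are illustrative assumptions.
import pandas as pd

def approval_rate_by_group(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Return the positive-outcome rate for each value of the protected attribute."""
    return df.groupby(group_col)[outcome_col].mean()

def flag_disparate_impact(rates: pd.Series, threshold: float = 0.8) -> bool:
    """Flag if the lowest group rate falls below `threshold` times the highest rate."""
    return rates.min() < threshold * rates.max()

if __name__ == "__main__":
    # Toy loan-approval data standing in for a real dataset.
    data = pd.DataFrame({
        "gender":   ["F", "F", "F", "F", "M", "M", "M", "M"],
        "approved": [0,    0,   1,   0,   1,   1,   1,   0],
    })
    rates = approval_rate_by_group(data, "gender", "approved")
    print(rates)
    print("Potential disparate impact:", flag_disparate_impact(rates))
```

In practice a tester would run such checks on both the training data and the model's predictions, since bias can enter at either stage.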
Machine Learning for Testers: Train, Evaluate, and Detect Bias in AI Models
This hands-on workshop is designed specifically for software testers interested in stepping into the world of Machine Learning (ML) with practical, easy-to-follow exercises using Python. Attendees will learn the fundamentals of ML from a QA perspective, including how to train a simple spam classifier using real-world data, understand what data ML models require, and most importantly, how to effectively test these models.
The session will also cover critical ethical aspects such as identifying and mitigating bias in datasets and model predictions. Participants will engage in building test cases for AI models, gaining skills essential for ensuring AI systems are fair, robust, and trustworthy. No prior experience in programming or data science is required—just curiosity and a tester’s mindset.
This workshop offers unique value as it bridges traditional testing with emerging AI technologies, a topic not widely covered in standard QA training, making it highly relevant for the growing AI testing community.
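As a rough illustration of the workshop's starting point, the sketch below trains a tiny spam classifier with scikit-learn and applies a tester-style quality gate to its held-out accuracy. The toy messages, labels, and the 0.75 threshold are assumptions for demonstration only, not the workshop's actual exercise.

```python
# Minimal sketch: train a tiny spam classifier and check it against a quality gate.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score

# Toy training data standing in for a real spam corpus (illustrative only).
train_messages = [
    "win a free prize now", "cheap loans click here", "claim your reward today",
    "meeting moved to 3pm", "please review the test plan", "status update for sprint 12",
]
train_labels = [1, 1, 1, 0, 0, 0]  # 1 = spam, 0 = ham

# Bag-of-words features + Naive Bayes: a classic, easy-to-inspect baseline.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_messages, train_labels)

# QA-style evaluation on messages the model has never seen.
test_messages = ["free loans, claim now", "sprint review meeting tomorrow"]
test_labels = [1, 0]
accuracy = accuracy_score(test_labels, model.predict(test_messages))

# A tester would turn this into an automated assertion with an agreed threshold.
threshold = 0.75  # illustrative bar, not a recommended value
print(f"Held-out accuracy: {accuracy:.2f} -> "
      f"quality gate {'PASS' if accuracy >= threshold else 'FAIL'}")
```

In a real project the quality gate would live in an automated test suite (for example, pytest), alongside checks for bias across relevant user groups.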
Events
Swiss Testing Day 2026 (upcoming)
PyCon Ireland 2025
Nerdearla España 2025
Nerdearla México 2024