Speaker

Vicente Herrera

Vicente Herrera

Principal Consultant at Control Plane

Alcalá de Guadaira, Spain

Actions

Principal Consultant at Control Plane, focusing on Kubernetes and AI cybersecurity for fintech organizations. Core member of AI Readiness Group in FINOS, collaborating in defining security risks, controls and mitigations. Lecturer at Loyola University in Seville for the Master's program in Artificial Intelligence. Author of "Building intelligent cloud applications" published by O'Reilly about leveraging Azure serverless and machine learning services. Regular speaker at international conferences.

Area of Expertise

  • Information & Communications Technology

Topics

  • cybersecurity
  • Kubernetes
  • blockchain
  • Machine Learning
  • Artificial Intelligence

Invisible Infiltration of AI Supply Chains: Protective Measures from Adversarial Actors

Malicious human and AI actors can infiltrate AI supply chains, compromising the integrity and reliability of the resultant AI systems through training data tampering, software or model backdoors, model interference, or new runtime attacks against the model or its hosting infrastructure.

This talk examines the importance of securing the data, models, and pipelines involved at each step of an AI supply chain. We evaluate the efficacy of emerging industry best practices and risk assessment strategies gathered from the FINOS AI Readiness Working Group, TAG Security Kubeflow joint assessment, and case studies with air-gapped and cloud-based AI/ML deployments for regulated and privacy-protecting workloads.

In this talk, we:
- threat model an AI system, from supply chain, through training and tuning, to production inference and integration
- implement quantified security controls and monitoring mechanisms for an AI enterprise architecture
- mitigate the risks associated with adversarial attacks on AI systems
- address compliance and regulation requirements with actionable remediations
- look to accelerate AI adoption while balancing minimum viable security measures

Securing your AI project: from guidelines to practical implementation

When leveraging AI in financial services, there are new technologies and security challenges to tackle in addition to the usual DevOps and scaling considerations.

As now exists several sources of information for AI security best practices, we are not left on our own. But how to practically apply them in your project is something not so well described.

In this talk, Rowan Baker and Vicente Herrera will explain how to start with a security guideline recommendation like the “Simple Governance Framework” from the FINOS AI Readiness Group, and apply it to a real project implemented with Helix and FluxCD.

They will show how to transform the recommendations into real actions and how specific controls or mitigations have been chosen and implemented in a practical way. Some of the approaches may be conventional but important, like those regarding supply chain security, others will be more AI specific and novel.

They will also show how the guidelines may not cover specific implementation, how to extend the coverage and contribute back so others benefit from the knowledge you have gained.

Securing GenAI in Finance: Practical Governance... but for real

In finance, leveraging AI can offer tremendous benefits, but it also has high risks that demand security excellence. Several regulations, classification of threats and risks, mitigation guidelines, and best practices have been created to assist in that.

Yet, despite these well-intentioned recommendations, they leave us with the question: How exactly do we implement these measures?

During this session, Vicente Herrera will present a practical security governance architecture for GenIA, focusing on financial institutions and open source. Through a model project, he will demonstrate threats, aligning them with the OWASP Top-10 and MITRE ATLAS classifications. Then, he will show how open-source tools and techniques can effectively mitigate these risks. Live demonstrations will showcase the consequences of implementing or neglecting these mitigations, where they must be in place, and why they are important.

Open Source Tools to Empower Ethical and Robust AI Systems

In this talk, Vicente Herrera will show us some open source tools for evaluating and securing AI models that are essential to building responsible AI systems. He will present an ontology explaining where each tool can assist in these tasks.

He will show tools like Garak, that helps identifying undesirable behaviors. LLM Guard and LLM Canary, providing detection and prevention of adversarial attacks and unintended data disclosures. Promptfoo, that optimizes prompt engineering and testing, leading to more reliable and consistent AI outputs.
For adversarial robustness, Counterfit, the Adversarial Robustness Toolkit, and BrokenHill provide solutions to assess AI models against malicious threats. Regarding fairness and compliance, AI Fairness 360 and Audit AI are important to understand how models can be just and accountable.

The final goal is being able to choose a model not only because how big ir is or good a knowledge evaluation score it has, but also about how robust and fair it is.

Future open source LLM kill chains

Several mission-critical software systems rely on a single, seemingly insignificant open-source library. As with xz utils, these are prime targets for sophisticated adversaries with time and resources, leading to catastrophic outcomes if a successful infiltration remains undetected.

A parallel scenario can unfold for the open-source AI ecosystem in the future, where a select few of the most powerful large language models (LLM) are repeatedly utilised, fine-tuned for tasks ranging from casual conversation to code generation, or compressed to suit personal computers. Then, they are redistributed again, sometimes by untrusted entities and individuals.

In this talk, Andrew Martin and Vicente Herrera will explain methods by which an advanced adversary could leverage access to LLMs. They will show full kill chains based on exploiting the open-source nature of the ecosystem or finding gaps in the MLOps infrastructure and repositories, which can lead to vulnerabilities in your software.

Finally, they will show both new and existing security practices that should be in place to prevent and mitigate these risks.

Bringing light to risks lurking in the black boxes of AI models

A tsunami of Generative Artificial Intelligence and Large Language Model applications are changing technology and the world as we know it.

In this talk, Vicente Herrera employing his expertise will show us the many security risks that can be associated with AI projects, specially when using open source. Old threats persists, but new ones lurks in the dark.

He will point his torchlight to reveal them and

But we are not alone in this fight, as he will teach us about the guidelines, best practices, standards and legislation at our disposal to make sure this AI future that comes is secure for everyone.

Introduction to Cloud Native Compliance and Benchmarks

A 101 about compliance standards (PCI, SOC2, HIPAA, NIST 800-190, NIST 800-53, GDPR, ISO 27001, FedRamp) and benchmarks (CIS). What do you need to know as a DevOps and DevSecOps to set up and maintain a compliant cloud-native environment, including cloud assets, containers and clusters.Takeaways from the session:
1) You will learn about requirements from your environment, the security measures you have to implement, and information about them you have to produce to be compliant.
2) You will understand the differences between each compliance standard, and when each one is useful.
3) For some compliance controls that have an abstract definition, we will explain how to translate the requirements to specific cloud-native technologies.

Secure AI Summit (powered by Cloud Native) North America 2024 Sessionize Event

June 2024 Seattle, Washington, United States

State of Open Con 24 Sessionize Event

February 2024 London, United Kingdom

Keep your bins in the cage, Falco is watching - BSides Berlin 2020

GTFOBins (Get the f*** out of binaries) project collects functions of Linux binaries that can be abused and exploited in different ways. Let’s analyze some interesting patterns and concrete examples, then learn how we can detect and respond to these threats using the Falco Runtime Security engine.

The GTFOBins repository is a very interesting collaborative project that collects legitimate functions of Unix binaries that can be abused for malicious usage like: * Getting out restricted shells * Escalate or maintain elevated privileges * Transfer files * Spawn bind and reverse shells * Facilitate the other post-exploitation tasks …

Although the usage is very well documented, the explanation and details are often not straightforward. We will go one step further by analyzing some of these examples, grouped by similar function patterns, and explaining how they work.

Then, we will talk about Falco Cloud-Native Runtime Security, its features, and learn how to create some Falco rules to detect these kinds of threats.

February 2022 Berlin, Germany

Cloud Native Days with Kubernetes Sessionize Event

August 2021

Detectando cryptominado con Falco (Spanish) - Kubernetes Community Days

Detecta cryptominado en tu infraestructura con Falco - Vicente Herrera, Sysdig e Iñaki Respaldiza, Okteto
Recientemente GitHub y GitLab están sufriendo abusos en sus servicios de CI para realizar cryptominado, lo que va a cambiar el panorama de los servicios CI. Cuando se tomen medidas efectivas, estos mineros buscarán nuevos objetivos… ¿Estás preparado para detectar si eres su próxima víctima? En esta charla haremos una introducción a Falco, la herramienta de detección de amenazas en ejecución open source de la CNCF, cómo funcionan sus reglas, y las últimas novedades que incluye. Luego veremos una aplicación real para detectar cryptominado en clusters de Kubernetes de gran volumen en producción, cómo se ha implementado, qué dificultades ha habido que superar, y cómo ha servido para evitar sucumbir a estos comportamientos maliciosos.

July 2021 Alcalá de Guadaira, Spain

Falco kernel and Kubernetes security (Spanish) - Hack Madrid

En este taller Vicente Herrera nos mostrará de forma práctica cómo funciona Falco, la herramienta open source de seguridad a nivel de Kernel y kubernetes, explicando particularidades de su instalación, la librería de reglas que incorpora así como la programación de nuevas reglas.

July 2020 Madrid, Spain

From DevOps to DevSecOps (Spanish) - HackMadrid/Codemotion

En esta charla Vicente nos mostrará diversos conceptos del estado del arte de la monitorización y seguridad, basado en su experiencia, y cómo ambos conceptos forman dos caras de un único concepto de gobernanza digital:
- DevOps, todavía hay que explicarlo.
- Estrategias multi-cloud e hybrid cloud.
- Monitorización: Prometheus y PromQL, estándares de facto
- Escalando Prometheus.
- Prevención vs Protección. ¿Són diferentes?
- Cubriendo las bases: Pipelines CI/CD, admission controller, runtime security, network policies
- Container & Kubernetes security benchmarks
- Compliance and security standards: PCI, NIST, SOC2, HIPAA

July 2020 Madrid, Spain

Detecting Anomalous Activity in Rancher with Falco - Rancher Labs

Kubernetes Masterclass, with Pawan Shankar and Vicente Herrera

April 2020 Alcalá de Guadaira, Spain

Vicente Herrera

Principal Consultant at Control Plane

Alcalá de Guadaira, Spain

Actions

Please note that Sessionize is not responsible for the accuracy or validity of the data provided by speakers. If you suspect this profile to be fake or spam, please let us know.

Jump to top