Speaker

Matteo Combi

Specialist Solutions Architect, Application Platform - Red Hat

Milan, Italy

I'm an Application Platform Specialist Solutions Architect.
Before joining Red Hat I was a Solutions Architect with more than 15 years of experience in the IT world, mainly in FSI. I've hosted many Red Hat Roadshow events and I'm a Red Hat Summit Connect speaker.
Currently I'm interested in connecting applications and AI.

Area of Expertise

  • Information & Communications Technology
  • Real Estate & Architecture

Topics

  • Application Architecture
  • Application Development
  • Applied Generative AI

How to stay up to date as a developer in the era of AI agents and MCP servers

Generative AI can increase developer productivity by up to 45%, but how?

In this session, Matteo and Natale will review the state of the art of AI-assisted development and the use of Large Language Models (LLMs) for local development, and then propose a pragmatic path toward agentic AI programming using open source frameworks such as Llama Stack and open protocols such as MCP.

By the end of the session, attendees will have a clearer view of how AI can support them in their day-to-day work in the enterprise world through local, open source AI solutions, including in private and digital-sovereignty contexts.
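
As a minimal sketch of the kind of building block the session refers to, and assuming the official MCP Python SDK (the mcp package) is installed, a tiny MCP server exposing one tool could look like the following; the server name and the tool itself are purely illustrative:

    # A tiny MCP server sketch using the MCP Python SDK's FastMCP helper.
    # The server name and the single tool below are illustrative only.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("demo-tools")

    @mcp.tool()
    def word_count(text: str) -> int:
        """Count the words in a piece of text."""
        return len(text.split())

    if __name__ == "__main__":
        # Runs over stdio so an AI agent / LLM client can discover and call the tool.
        mcp.run()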

Improving developer productivity with Podman Desktop AI Capabilities

Testing an LLM on your local machine can be challenging, especially if you're unsure where to begin. With the Podman Desktop AI extension, you can effortlessly run a model along with all the necessary components, directly on your laptop, quickly and at no cost.
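
As a minimal sketch, assuming Podman AI Lab is already serving a model through an OpenAI-compatible endpoint on localhost (the port and model name below are illustrative, not the extension's defaults), a quick local smoke test could look like this:

    # Minimal local test against a model served by Podman AI Lab.
    # Assumptions: an OpenAI-compatible API is exposed on localhost:35000
    # and the model name matches the one started in the extension.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:35000/v1", api_key="not-needed")

    response = client.chat.completions.create(
        model="granite-7b-lab-GGUF",  # illustrative model name
        messages=[{"role": "user", "content": "Summarize what Podman does in one sentence."}],
    )
    print(response.choices[0].message.content)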

Improve AI inference (model serving) with KServe and vLLM

Red Hat integrates and supports both KServe and vLLM in its MLOps platform, OpenShift AI. In addition, Red Hat engineers actively contribute to the KServe and vLLM upstream projects every day.

In this session, we'll cover:
- a brief intro to Red Hat OpenShift AI, describing its components at a high level, all of which come from open source projects
- how KServe fits into OpenShift AI, and the benefits of KServe as a model serving platform
- one step further: how choosing vLLM as the runtime for LLMs and KServe as the model serving platform can help
- faster inference and optimized resource consumption with techniques such as continuous batching, PagedAttention, and speculative decoding (see the sketch after this list)
- further optimized resource consumption through LLM quantization with vLLM's LLM Compressor library
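
As a minimal sketch, assuming vLLM is installed locally and using an illustrative small model, offline batched generation looks like the following; the engine applies continuous batching and PagedAttention automatically, so the snippet only shows the user-facing API:

    # Minimal offline-inference sketch with vLLM; the engine schedules the
    # prompts with continuous batching and manages KV-cache memory with
    # PagedAttention under the hood. The model name is illustrative.
    from vllm import LLM, SamplingParams

    prompts = [
        "Explain model serving in one sentence.",
        "What is KServe used for?",
    ]
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

    llm = LLM(model="facebook/opt-125m")  # small model for a quick local test
    outputs = llm.generate(prompts, sampling_params)

    for output in outputs:
        print(output.prompt, "->", output.outputs[0].text.strip())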
