Speaker

Mihai Criveti

Distinguished Engineer for AI at IBM

Mihai Criveti is a Distinguished Engineer for AI Agents at IBM, where he shapes Agentic AI standards across IBM Consulting and leads development of Agents for Advantage, a GenAI platform powering productivity for over 160,000 consultants. He created Context Forge, IBM’s open source Model Context Protocol (MCP) Gateway and Registry, which enables secure, observable interoperability between LLM applications and tools by translating REST APIs to MCP and bridging protocols such as stdio, SSE, and streamable HTTP.

Earlier in his career, Mihai led the development of the Scribeflow RAG solution and served as CTO for Cloud Native and Red Hat Solutions at IBM, driving global strategy for open hybrid cloud. He holds Red Hat Certified Architect Level III credentials, with deep expertise in enterprise Linux, automation, container platforms, and hybrid infrastructure.

His work spans platform engineering, AI orchestration, and developer experience, with a strong focus on building open, modular systems that accelerate real-world AI adoption.

Area of Expertise

  • Information & Communications Technology

Topics

  • AI
  • Agents
  • MCP
  • Security

Building Scalable AI Multi-Agent Collaboration with LangChain, LangGraph, CrewAI, and RAG

A deep dive into multi-agent collaboration, ReAct prompting, and advanced LLM techniques. Designing and building safe, trusted GenAI applications using multi-agent collaboration, and understanding how popular agentic frameworks work and can be combined: LangChain, LangGraph, AutoGen, AutoGPT, CrewAI, and more.
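
The ReAct pattern mentioned above interleaves model "thoughts", tool calls, and observations in a loop. Here is a minimal, framework-agnostic sketch in pure Python; the `stub_llm` function is a stand-in for a real chat model (in practice you would call LangChain, LangGraph, or CrewAI), and all names are illustrative assumptions, not the session's actual code:

```python
def calculator(expression: str) -> str:
    """Toy tool: evaluate a simple arithmetic expression."""
    return str(eval(expression, {"__builtins__": {}}, {}))

TOOLS = {"calculator": calculator}

def stub_llm(history: list[str]) -> str:
    """Stand-in for a real model: emit an action, then a final answer."""
    if not any(line.startswith("Observation:") for line in history):
        return "Thought: I should compute this.\nAction: calculator[2 + 3 * 4]"
    observation = [line for line in history if line.startswith("Observation:")][-1]
    return f"Final Answer: {observation.split(': ', 1)[1]}"

def react_loop(question: str, max_steps: int = 5) -> str:
    """Thought -> Action -> Observation loop until a final answer appears."""
    history = [f"Question: {question}"]
    for _ in range(max_steps):
        reply = stub_llm(history)
        history.append(reply)
        if "Final Answer:" in reply:
            return reply.split("Final Answer:", 1)[1].strip()
        if "Action:" in reply:
            action = reply.split("Action:", 1)[1].strip()
            name, arg = action.split("[", 1)
            history.append(f"Observation: {TOOLS[name.strip()](arg.rstrip(']'))}")
    return "No answer within step budget."

print(react_loop("What is 2 + 3 * 4?"))  # → 14
```

The frameworks covered in the session implement this same loop with real models, richer tool schemas, and state management; the sketch only shows the control flow.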

Agent orchestration using popular open source language models, model serving, and function calling, including agentic Retrieval-Augmented Generation with vector databases or internet search results in your applications.

Text-to-SQL, code generation techniques, and safe execution of AI-generated code.
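
One common isolation pattern for running AI-generated code, sketched below with only the standard library: execute the snippet in a separate interpreter process with a hard timeout and no shell, capturing output rather than calling exec() in-process. This is an assumption about the general technique, not the session's implementation, and real deployments layer on much stronger sandboxing (containers, gVisor, seccomp):

```python
import subprocess
import sys

def run_generated_code(code: str, timeout_s: float = 5.0) -> tuple[int, str, str]:
    """Execute untrusted Python in a child process; return (rc, stdout, stderr)."""
    try:
        proc = subprocess.run(
            [sys.executable, "-I", "-c", code],  # -I: isolated mode, ignores env/site
            capture_output=True,
            text=True,
            timeout=timeout_s,
        )
        return proc.returncode, proc.stdout, proc.stderr
    except subprocess.TimeoutExpired:
        return -1, "", f"timed out after {timeout_s}s"

rc, out, err = run_generated_code("print(sum(range(10)))")
print(rc, out.strip())  # → 0 45
```

The timeout bounds runaway loops, and the child process keeps crashes and sys.exit() calls out of the host application.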

Build your own "Chat with Local Files" using Retrieval Augmented Generation

Learn how to build an advanced "Chat with Local Files" application using Retrieval-Augmented Generation (RAG) with a vector database. This guide will walk you through integrating LangChain, ChromaDB, embedding models, and local machine learning models to create an intelligent, responsive chatbot capable of understanding and interacting with your data.

During this workshop, you'll set up LangChain and ChromaDB, and build a FastAPI backend plus a Streamlit frontend. You'll use embedding models from Hugging Face and local models running on Ollama. Alternatively, you can use public LLMs with your own API key.

We'll ingest a variety of files including PDF, text, and DOCX, and discuss how to optimize LLM prompts for RAG.
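
Whatever the source format, ingestion usually reduces to splitting the extracted text into overlapping chunks before embedding. Below is an illustrative pure-Python chunker showing the idea behind LangChain's text splitters; the sizes are example values, not the workshop's settings, and should be tuned to your embedding model:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping fixed-size chunks so context isn't cut mid-thought."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    # Each window starts `step` characters after the previous one,
    # so consecutive chunks share `overlap` characters.
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]
```

For example, a 1,200-character document yields three chunks, with the last 50 characters of each chunk repeated at the start of the next, so a sentence straddling a boundary is still retrievable as a whole.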

You will need the following:
- Ollama (https://ollama.com) with a small model such as phi3 or gemma2:2b, or access to an LLM via an API key
- An embedding model, such as sentence-transformers/all-MiniLM-L6-v2

To speed up setup during the workshop, you can install the recommended libraries in advance: `pip install -U sentence-transformers chromadb streamlit langchain langchain-community langchain-ollama fastapi jupyterlab python-docx`
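
Once chunks are embedded and stored, answering a question means ranking stored vectors by cosine similarity to the query vector. The stdlib-only sketch below hand-crafts tiny 3-dimensional vectors purely for illustration; in the actual workshop the embeddings come from sentence-transformers/all-MiniLM-L6-v2 and live in ChromaDB, which performs this ranking for you:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# (chunk text, toy embedding) pairs standing in for a vector DB collection
store = [
    ("Invoices are due within 30 days.", [0.9, 0.1, 0.0]),
    ("The cat sat on the mat.",          [0.0, 0.2, 0.9]),
    ("Payment terms and late fees.",     [0.8, 0.3, 0.1]),
]

def retrieve(query_vec: list[float], k: int = 2) -> list[str]:
    """Return the k chunks most similar to the query vector."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# A query vector "near" the billing-related chunks ranks them first
print(retrieve([1.0, 0.2, 0.0]))
```

The retrieved chunks are then pasted into the LLM prompt as context, which is the step the prompt-optimization discussion in this workshop focuses on.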

Estimated duration: 3 hours.
