
Marc Plogas
Herding AI Cats, Before It Was Cool.
Berlin, Germany
Actions
Marc Plogas discovered his passion for technology at the age of six when an Atari sparked his interest in programming. He holds a Master's in Computer Science and spent nearly a decade freelancing as a software, mobile, and test engineer, as well as a solution architect, before joining Microsoft. At Microsoft, he played an instrumental role in pioneering technologies such as Windows Mixed Reality, Desktop Bridge, and WinUI 3, and later advised startups worldwide on optimizing their cloud architectures with a focus on reliability, scalability, cost efficiency, and security. Now embarking on a new chapter, Marc eagerly shares his extensive expertise in IoT, AI, ML, and Mixed Reality through engaging public talks and workshops.
Area of Expertise
Topics
Herding AI Cats: Semantic Kernel Multi-Agents Scenarios.
Semantic Kernel: Beyond the "Hello, World."
This deep dive gets elbows-deep into the framework for serious LLM, plugin, and agent orchestration. We'll tear down a multi-agent software dev scenario to expose advanced techniques in orchestration, planning, and integration. Expect real code, real problems, and how to beat Semantic Kernel into submission for complex tasks. If you want the advanced stuff, this is it.
Semantic Kernel Agents: Who Needs Developers Anyway?
Semantic Kernel is a framework designed to facilitate the integration and orchestration of large language models, plugins and agents in AI applications. In this session, we start with a brief overview of Semantic Kernel core concepts and plugins followed with an in-depth description of Semantic Kernel Agents. To illustrate the concept of multi-agent scenarios and benefits of multiple AI models, we'll present a collaborative demo to streamline and enhance the software development process.
From RAG to Riches: The Evolution of Retrieval-Augmented Generation
Traditional RAG approaches have transformed the way we integrate content retrieval with generative models. However, they come with notable limitations such as efficiency bottlenecks, scalability issues, and challenges in context integration. In this session, we will explore these shortcomings in detail and introduce you to cutting-edge innovations like RAG 2.0 and GraphRAG. Learn how RAG 2.0 enhances retrieval mechanisms to improve performance and accuracy, and discover how GraphRAG leverages graph structures for superior context management and richer information synthesis.
Let's delve into the fascinating world of RAG and its latest advancements together!
Constructing a Semantic Ingestion Pipeline
Build a seamless, continuous ingestion pipeline capable of processing diverse data formats such as PDFs, images, and markdown using Semantic Kernels Kernel Memory service. This session will guide you through the intricacies of creating a system that not only scans and updates files at regular intervals but also optimizes chat applications with robust, contextually aware responses using Retrieval-Augmented Generation (RAG).
How to Train Your Prompt Dragon and Protect Your Model
As the power and utility of Language Model (LLM) technology continue to grow, so too does the importance of Prompt Engineering - or Prompt Programming - in guiding its application. In this session, we'll provide a quick recap on how LLMs work before diving into the world of Prompt Engineering. We'll examine different types of prompt engineering and weigh the pros and cons of their use. But with great power comes great responsibility, and the risks of "prompt demons" in the form of security concerns with prompt injection cannot be ignored. To address these issues, we'll discuss the best practices for secure prompt engineering, and explore the future of prompt engineering and security with LLMs. Join us for an informative session on navigating the balance between innovation and security with Transformer models.
GTP and Codex: The Dynamic Duo of NLP or Skynet's First Step?
Large Language Models (LLMs) have transformed the way we process natural language. OpenAI's GPT and Codex models have taken NLP to a new level, making it possible to perform tasks such as language translation, question-answering, and text generation with unprecedented accuracy.
In this talk, we will discuss the inner workings of Transformer models to help us understand their societal impact, including their impact on information retrieval, job markets, privacy, and the environment. We will also discuss the ethical concerns surrounding LLMs, such as potential biases and their impact on society at large.
Global AI Community Bootcamp {Berlin} Sessionize Event
AI Community Day - Berlin Sessionize Event
Technical Summit 2024 EN Sessionize Event
Global AI Bootcamp 2024 Germany/Karlsruhe Sessionize Event
Microsoft Build 2023 After Party - AI & Metaverse Playground Berlin Sessionize Event
Please note that Sessionize is not responsible for the accuracy or validity of the data provided by speakers. If you suspect this profile to be fake or spam, please let us know.
Jump to top