
Alan Smith


AI Developer, Active Solution

Stockholm, Sweden


Alan Smith is a Cloud & AI Trainer, Mentor & Coach at Active Solution in Stockholm, Sweden. He has a strong hands-on philosophy and focuses on embracing the power and flexibility of cloud computing to deliver engaging and exciting demos and training courses.

Alan has held the MVP title since 2005, and is currently an AI MVP. He is in the organization team for the CloudBurst and AI Burst Conferences.


Area of Expertise

  • Information & Communications Technology

Topics

  • Artificial Intelligence
  • Machine Learning

Drop the Bass with Embedding, Vectors and Nearest Neighbor Search

Text embedding and vector-based search provide great capabilities for searches based on the meaning, or sentiment, of text rather than using keywords. These techniques are often used with great effect in retrieval augmented generation in LLM based “chat with your data” solutions.

Embedding and vector searches can be used in many scenarios besides text. If you can embed data entities as vectors of numbers, you can use a vector search to find similar entities; this includes images, faces, sounds, video and even music.
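As a rough illustration of the idea, here is a minimal nearest-neighbor search over toy embedding vectors using cosine similarity. The entity names and three-dimensional vectors are invented for the sketch; real embedding models produce hundreds or thousands of dimensions.

```python
from math import sqrt

def cosine_similarity(a, b):
    # Measure how closely two embedding vectors point in the same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm

def nearest_neighbor(query, items):
    # items: mapping from an entity name to its (hypothetical) embedding vector.
    return max(items, key=lambda name: cosine_similarity(query, items[name]))

# Toy 3-dimensional "embeddings" for illustration only.
library = {
    "kick-heavy track": [0.9, 0.1, 0.2],
    "ambient pad":      [0.1, 0.8, 0.6],
    "snare roll":       [0.7, 0.3, 0.1],
}
print(nearest_neighbor([0.8, 0.2, 0.2], library))  # "kick-heavy track"
```

The same shape of search works whatever produced the vectors, which is what lets the technique move from text to images to beat signatures.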

In this demo-intensive session Alan will explore the concepts of embedding and nearest-neighbor search. The analysis and embedding of data will be explained, along with techniques to vectorize text, images and music. Alan will show how text and image embedding can be leveraged, and then show similar techniques used to create a vector search using the beat signatures in electronic dance music. This can then be used to find melodically similar beat patterns in tracks, allowing code to mix them seamlessly, creating a stunning audio experience!

Well, that’s the theory. Can this work in practice?

Join this session and learn about the power and versatility of embedding and vector-based searching.

Y’all ready for this?

Inside GPT – Large Language Models Demystified

Natural language processing using generative pre-trained transformers (GPT) algorithms is a rapidly evolving field that offers many opportunities and challenges for application developers. But what is a generative pre-trained transformer, and how does it work? How can you leverage the latest advances in GPT algorithms to create engaging and useful applications? Can my business benefit from creating a GPT powered chat bot?

In this demo-intensive session Alan will take a deep dive into the architecture of GPT algorithms and the inner workings of ChatGPT. The journey will begin by looking at the fundamental concepts of natural language processing, such as word embedding, vectorization and tokenization. He will then demonstrate how these techniques can be applied to train a GPT-2 model that generates song lyrics, showing the internals of how word sequences are predicted.
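To give a flavour of how word sequences are predicted, here is a toy sketch that stands in for a language model using simple bigram counts. The lyric and the greedy decoding are illustrative only; GPT-2 learns these statistics with a transformer, not a lookup table.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    # Count which token follows which in the training text.
    model = defaultdict(Counter)
    tokens = corpus.split()
    for current, nxt in zip(tokens, tokens[1:]):
        model[current][nxt] += 1
    return model

def predict_next(model, token):
    # Greedy decoding: pick the most frequent continuation.
    return model[token].most_common(1)[0][0]

lyrics = "never gonna give you up never gonna let you down"
model = train_bigrams(lyrics)
print(predict_next(model, "never"))  # "gonna"
```

A real GPT model replaces the count table with a neural network that scores every possible next token, but the decoding loop, predict a token, append it, predict again, is the same idea.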

Alan will then shift the focus to larger language models, such as ChatGPT and GPT-4, demonstrating their power, capabilities and limitations. The use of hyperparameters such as temperature and frequency penalty will be explained, and their effect on the generated output demonstrated. He will then cover the concepts of prompt engineering and demonstrate how Retrieval Augmented Generation (RAG) patterns can be leveraged to create a ChatGPT experience based on your own textual data.
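The effect of the temperature hyperparameter can be sketched with the softmax function it scales; the logit values below are invented for illustration.

```python
from math import exp

def softmax_with_temperature(logits, temperature=1.0):
    # Lower temperature sharpens the distribution (more deterministic output),
    # higher temperature flattens it (more surprising output).
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical next-token scores from a model
print(softmax_with_temperature(logits, 1.0))
print(softmax_with_temperature(logits, 0.2))  # near one-hot
print(softmax_with_temperature(logits, 5.0))  # near uniform
```

Frequency penalty works differently, subtracting a penalty from the logits of tokens that have already appeared, but it also acts on these scores before sampling.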

Join me for this session if you want to learn how to harness the power of GPT algorithms in your own solutions.

GPT vs Starcraft II – Strategic Decision Making using Large Language Models

Starcraft II requires complex strategy decisions to be made in real time. Planning resources, building the optimal structures and units, and deciding when to upgrade, defend or attack are all critical to success in a game. Large language models, such as OpenAI o1, have shown impressive performance on various natural language tasks, such as text summarization, question answering and text generation. But how well can they make strategic decisions in a dynamic and competitive environment?

In this demo-intensive session Alan will explore the capabilities of large language models for strategic decision making. He will explain the strategy decisions that need to be made in a Starcraft II game, and what makes it an ideal scenario for exploring and evaluating the capabilities of GPT models. Alan will then focus on the techniques for leveraging large language models for strategic decision making, including prompt engineering and state description, as well as parsing and understanding the response messages. He will also discuss different scenarios where large language models can be leveraged in strategic decision making.
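As a hedged sketch of the prompt-and-parse loop, the snippet below builds a textual state description and then validates a model reply before acting on it. The state fields, action names and reply text are invented; a real implementation would call an LLM API where noted.

```python
import json

def build_prompt(state):
    # Describe the game state in text the model can reason over.
    return (
        "You are a Starcraft II strategy advisor. Given the game state, "
        'reply with JSON: {"action": ..., "reason": ...}.\n'
        f"Game state: {json.dumps(state)}"
    )

def parse_decision(reply):
    # Never trust the model blindly: validate the reply before acting on it.
    decision = json.loads(reply)
    if decision.get("action") not in {"expand", "attack", "defend", "build_units"}:
        raise ValueError(f"unknown action: {decision.get('action')}")
    return decision

state = {"minerals": 850, "supply": "44/52", "enemy_sighted": False}
prompt = build_prompt(state)
# A hypothetical model reply; a real call to an LLM API would go here.
reply = '{"action": "expand", "reason": "High mineral bank and no immediate threat."}'
print(parse_decision(reply)["action"])  # "expand"
```

Constraining the model to a small, validated action vocabulary is what makes its output safe to feed back into the game loop.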

Join me for this session if you want to learn more about using large language models in strategic decision-making processes, or just sit back and watch GPT destroy the Zerg.

LLM Agents Unleashed: Building with Function‑Calling Models, Tools and MCP

Large language models have evolved from passive text generators into active, tool‑using agents capable of reasoning, taking actions, and orchestrating complex workflows. With the rise of the Model Context Protocol (MCP), standardized function calling, and agent‑centric application patterns, developers are now building systems where LLMs collaborate with APIs, data sources, and business logic in real time. The question is no longer “How do I prompt a model?” but “How do I design an ecosystem where models can safely and reliably get things done?”

This workshop dives deep into modern agent development using Microsoft Foundry, OpenAI, and third‑party providers that support MCP and advanced function‑calling interfaces.

You’ll learn how today’s agentic systems work, how they reason, how they choose tools, and how to design the surrounding infrastructure that keeps them safe, predictable, and aligned with business requirements.

Key questions we’ll explore include:
• How do agent architectures differ from traditional LLM applications?
• What is the Model Context Protocol, and how does it enable interoperable, pluggable AI ecosystems?
• How do function‑calling models reason about which tools to use — and how do you design those tools effectively?
• How can agents integrate with enterprise data, APIs, and workflows while maintaining safety, governance, and auditability?
Through hands‑on exercises, you’ll build:
• Function‑calling interfaces that expose business capabilities to LLMs
• MCP servers and clients that allow models to interact with tools, data sources, and services
• Retrieval‑augmented agents that ground their decisions in organizational knowledge
• Agent workflows that chain reasoning, tool use, and verification steps
• Safety and evaluation layers that ensure agents behave reliably in production

By the end of the workshop, you’ll understand how to design, build, and operate agentic AI systems that go far beyond chat — systems that take action, automate processes, and integrate deeply with your applications and data.
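As a taste of the first exercise, here is a minimal, hypothetical sketch of an OpenAI-style function-calling interface: the JSON schema the model sees, and the dispatcher the application uses to execute the model's tool calls. The tool name and inventory data are invented for illustration.

```python
import json

# An OpenAI-style tool definition: the model sees only this schema and
# decides, from the name and description, when and how to call the function.
get_stock_level = {
    "type": "function",
    "function": {
        "name": "get_stock_level",
        "description": "Return the number of units in stock for a product.",
        "parameters": {
            "type": "object",
            "properties": {
                "product_id": {"type": "string", "description": "SKU of the product"}
            },
            "required": ["product_id"],
        },
    },
}

# Hypothetical business capability backing the tool.
INVENTORY = {"SKU-42": 17}

def dispatch(tool_call):
    # The model returns a function name plus JSON-encoded arguments;
    # the application, not the model, executes the call.
    args = json.loads(tool_call["arguments"])
    if tool_call["name"] == "get_stock_level":
        return {"units": INVENTORY.get(args["product_id"], 0)}
    raise ValueError(f"unknown tool: {tool_call['name']}")

print(dispatch({"name": "get_stock_level", "arguments": '{"product_id": "SKU-42"}'}))
```

Keeping execution on the application side is the core safety property: the model proposes, your code disposes.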

Who should attend?
This workshop is ideal for developers and data scientists looking to deepen their understanding of generative AI and LLMs and integrate them into their projects.
Programming experience in C# or Python will be required for most of the hands-on labs.

What should you bring?
• Laptop with a development environment suitable for Python or C# development.
• Access to Microsoft Azure and Microsoft Foundry or the OpenAI API services.

AI-Assisted Development with ChatGPT and GitHub Copilot

Large language models have evolved quickly and are revolutionizing the way people work and engage with AI. Models such as ChatGPT and GitHub Copilot can generate code in almost any programming language and identify bugs and issues in code. Leveraging the power of these models to improve developer productivity is currently a strong focus for many companies and developers.

AI assisted code generation also raises important issues and questions.

• Can AI generated code be trusted to function correctly?
• Will AI replace developers in the workforce?
• How can AI be leveraged in the best way to improve productivity?

This workshop will provide hands-on experience of leveraging ChatGPT and GitHub Copilot to assist in the software development process. AI will be leveraged to assist with the following tasks:

• Brainstorming ideas and suggesting features.
• Generating code for a prototype application.
• Creating test code.
• Identifying issues and fixing bugs.
• Generating images and UI graphics.

There are many different scenarios that could be used for the hands-on implementation, some examples are provided below.

Integrating with Azure Services
The Azure platform provides an extensive range of cloud-based platform as a service (PaaS) offerings enabling developers to build scalable and reliable applications. Starting with a simple web application, you will leverage ChatGPT and GitHub Copilot to extend the application to integrate with other Azure services and enhance its functionality.

Game development in Python
The PyGame library provides a quick and productive entry to developing basic games in Python. The extensive range of sample code in the public domain means that ChatGPT and GitHub Copilot are able to generate usable code for a starting project. This can then be extended using generated code to enhance the game, troubleshoot issues, and develop and test features. Generative AI can also be used to create graphics and artwork for the game.

File -> New -> Project…
ChatGPT and GitHub Copilot are excellent tools for learning a new programming language, improving your programming skills and learning new techniques or SDKs. Feel free to choose your own scenario, using ChatGPT and GitHub Copilot to accelerate your learning experience. When selecting a scenario, it is best to focus on exploring a new area, technology or language.

Drop the Bass with Embedding and Vectors in Azure AI Search

Azure AI Search provides great capabilities for keyword search and AI skillsets, and now has the capability for text embedding and vector-based searches. Vector-based searches can be used in many scenarios besides text. If you can embed data items, converting them to vectors of numbers, you can search for similar items; this includes images, sounds, video and even music.

In this demo-intensive session Alan will explore the concepts of embedding and nearest-neighbor search. The analysis and embedding of data will be explained, along with techniques to vectorize text, images and music. He will then demonstrate how vector-based indexes can be created in Azure AI Search using both the Azure portal and programmatically. Alan will show how text and image embedding can be leveraged, and then show the techniques used to create an index using the beat signatures in music. The search index can then be used to find similar sounding beat patterns in songs, allowing code to mix tracks seamlessly, creating a stunning AI-augmented audio experience.
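For orientation, the sketch below shows the rough shape of an Azure AI Search index definition with a vector field. The index name, field names, dimensions and algorithm configuration are all invented for illustration, and the exact schema depends on the REST API version you target, so treat this as a hedged outline rather than a working definition.

```python
# Hypothetical Azure AI Search index with a vector field for beat signatures.
beat_index = {
    "name": "beat-index",
    "fields": [
        {"name": "id", "type": "Edm.String", "key": True},
        {"name": "trackName", "type": "Edm.String", "searchable": True},
        {
            "name": "beatVector",
            "type": "Collection(Edm.Single)",   # a vector of floats
            "searchable": True,
            "dimensions": 128,                  # must match the embedding size
            "vectorSearchProfile": "beat-profile",
        },
    ],
    "vectorSearch": {
        # HNSW is the approximate nearest-neighbor algorithm the service offers.
        "algorithms": [{"name": "hnsw-config", "kind": "hnsw"}],
        "profiles": [{"name": "beat-profile", "algorithm": "hnsw-config"}],
    },
}

vector_field = beat_index["fields"][2]
print(vector_field["name"], vector_field["dimensions"])
```

At query time you embed the query the same way as the indexed items and ask the service for the nearest vectors, whether the vectors came from text, images or beat patterns.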

Well, that’s the theory. Can this work in practice? Join this session and learn about the power and versatility of vector-based searching.

Y’all ready for this?

AI Agents In-Depth – Function Calling, MCP and Tool Use Under the Hood

Agentic AI enables large language models to use tools to interact beyond system boundaries. From no-code implementations in Copilot Studio to developing sophisticated solutions using LangChain and Semantic Kernel, developers of all levels can leverage tools to create agentic solutions.

But how does agentic AI work under the hood?
How are models able to understand when to use a tool and how to use it?

In this demo-intensive session Alan will explain the concepts of agentic AI in detail. Starting with the basics of function calling in the OpenAI models, he will cover the techniques used to provide tools to models, and how the models are able to understand and use them. The use of prompting to guide models and improve tool-call behavior will be covered, along with guidelines for adding tools to your solution. Alan will also cover the protocols used by OpenAI models, as well as how the Model Context Protocol is used to provide compatibility between services and LLM solutions.
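The round trip under the hood can be sketched as an OpenAI-style chat message sequence; the tool name, arguments and weather values here are invented, not a real transcript.

```python
import json

# Sketch of the message flow in one function-calling round trip.
messages = [
    {"role": "user", "content": "What's the weather in Stockholm?"},
    # 1. The model does not answer directly; it emits a tool call.
    {"role": "assistant", "tool_calls": [{
        "id": "call_1",
        "function": {"name": "get_weather",
                     "arguments": '{"city": "Stockholm"}'}}]},
    # 2. The application executes the tool and feeds the result back
    #    as a "tool" message linked by the call id.
    {"role": "tool", "tool_call_id": "call_1",
     "content": json.dumps({"temp_c": -3, "condition": "snow"})},
    # 3. A second model call over the full history produces the final,
    #    grounded answer.
    {"role": "assistant",
     "content": "It is -3 °C and snowing in Stockholm."},
]

tool_result = json.loads(messages[2]["content"])
print(tool_result["condition"])  # "snow"
```

MCP standardizes steps 1 and 2, so the same tool servers can be plugged into different models and hosts without rewriting this plumbing.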

Join this session if you want to understand what goes on under the hood of function calling, MCP and agentic AI.
