Carey Payette
Financial Finesse
Columbus, Ohio, United States
Carey Payette is a Principal Engineer at Financial Finesse, an Azure AI MVP, a Microsoft Certified Trainer, and part of the Progress Ninja community. Known for turning cutting-edge AI concepts into practical, production-ready solutions, she specializes in architecting intelligent, agentic systems, securing AI systems and their data, and designing scalable platforms that solve real-world problems. She brings additional depth through her work in machine learning, IoT, and data analytics. Carey combines deep technical expertise with a passion for creating impactful, forward-thinking technology.
Topics
Harnessing the Power of Multi-Agent Systems: Patterns, Pitfalls, and Practical Applications
Multi-agent systems are increasingly shaping how we build distributed, intelligent applications. This session explores the strengths and limitations of multi-agent architectures, drawing on real-world scenarios to highlight when and why they shine—or fail. We’ll review common development patterns, integration strategies, and a decision-making framework to help you assess whether a multi-agent approach fits your needs.
Practical Strategies for Improving RAG System Accuracy (Workshop)
This workshop arms attendees with practical strategies for improving the performance of Retrieval Augmented Generation (RAG) systems. We'll explore techniques for optimizing document processing, embedding generation, and knowledge search operations. Hands-on labs cover various implementations using popular services and libraries.
Attendees should have global admin or owner permissions on an Azure subscription and have the ability to write and run both Python and C# code.
Practical Strategies for Improving RAG System Accuracy
This session provides practical strategies for improving the performance of Retrieval Augmented Generation (RAG) systems. We'll explore techniques for optimizing document processing, embedding generation, and knowledge search operations. The session covers practical implementations using popular services and libraries.
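As a taste of the kind of levers the session covers, here is a minimal, framework-free sketch of two of them: overlapping chunking during document processing and similarity-ranked retrieval. The bag-of-words `embed` stub is a hypothetical stand-in for a real embedding model (for example, an Azure OpenAI embedding deployment); all names here are illustrative, not taken from the session materials.

```python
import math
from collections import Counter

def chunk(text, size=200, overlap=50):
    """Split text into overlapping chunks; the overlap keeps sentences that
    straddle a boundary visible to both neighboring chunks."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(text):
    """Toy bag-of-words 'embedding'; a real system would call an embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    """Rank chunks by similarity to the query and return the top k."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]
```

Tuning `size` and `overlap` against your own corpus, and swapping the stub for a real embedding model, is exactly the kind of iteration the session's techniques target.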
AI Without Fear: Secure Code Execution for LLMs in Azure Container Apps
In the rapidly evolving landscape of Artificial Intelligence, the ability to execute code securely and efficiently is paramount. This presentation introduces Azure Container Apps dynamic sessions as a robust solution for running code within an AI system. By leveraging isolated, sandboxed environments, the system can confidently execute potentially untrusted code, including user-provided scripts or code generated by Large Language Models (LLMs), while mitigating risks and ensuring the integrity of the system. This session will equip you with the knowledge to confidently integrate secure code execution into your AI solutions, minimizing vulnerabilities and maximizing trust.
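The dynamic sessions service itself is best explored in the session, but the contract it offers (submit untrusted code, get output back from an isolated environment) can be sketched locally with a separate interpreter process and a hard timeout. Treat this as an illustration of the isolation idea under stated assumptions, not as the service's API.

```python
import subprocess
import sys

def run_untrusted(code: str, timeout: float = 5.0) -> str:
    """Run a code snippet in a separate interpreter process with a hard timeout.

    A local stand-in for a real sandbox: Azure Container Apps dynamic sessions
    provide the same submit-code/get-output contract from a fully isolated,
    sandboxed container rather than a local process.
    """
    try:
        result = subprocess.run(
            [sys.executable, "-I", "-c", code],  # -I: isolated mode, ignores user env and site dirs
            capture_output=True, text=True, timeout=timeout,
        )
    except subprocess.TimeoutExpired:
        return "error: execution timed out"
    if result.returncode != 0:
        return "error: " + (result.stderr.strip() or f"exit code {result.returncode}")
    return result.stdout
```

Even this toy version shows the two properties that matter when executing LLM-generated code: the host process survives a runaway loop, and failures come back as data rather than exceptions.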
Terraform in Azure Workshop
Infrastructure as Code (IaC) has become a critical part of DevOps, allowing for declarative definition, configuration, deployment, and management of simple and complex IT solutions. In this workshop, we’ll review the importance of IaC and how it fits into the Continuous Delivery aspect of DevOps. In addition, you will get hands-on with Terraform with labs and start deploying solutions to the Azure cloud!
Introduction to Azure Infrastructure as Code
Infrastructure as Code (IaC) has become a critical part of DevOps, allowing for declarative definition, configuration, deployment, and management of simple and complex IT solutions. In this session, we’ll review the importance of IaC and how it fits into the Continuous Delivery aspect of DevOps. In addition, you will get a taste of different IaC technologies, such as ARM, Bicep, Terraform, Ansible, and Pulumi, to manage your Azure resources.
Designing AI Systems for a Global Audience on Azure
Users expect answers in their language, even when your content was never written for them. In this session, we’ll explore two common patterns for multilingual AI systems on Azure: one where documents are available in each language and one where the corpus exists in only a single language. Through live demos, we’ll compare locale-aware retrieval, multilingual search, document translation, and translation after retrieval, and show when an LLM responding in the user’s language is enough—and when the retrieval architecture has to do more of the work.
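One of those patterns, translation after retrieval, can be expressed as a small pipeline with pluggable steps. The `retrieve`, `translate`, and `generate` callables below are hypothetical stand-ins for Azure AI Search, Azure AI Translator, and an Azure OpenAI chat deployment; the sketch shows only the data flow, not any real service call.

```python
from typing import Callable

def answer_in_user_language(
    query: str,
    user_lang: str,
    corpus_lang: str,
    retrieve: Callable[[str], str],
    translate: Callable[[str, str, str], str],  # (text, source_lang, target_lang) -> text
    generate: Callable[[str, str], str],        # (query, context) -> answer
) -> str:
    """Translate-after-retrieval: search the single-language corpus,
    then hand the user an answer in their own language."""
    # 1. Bring the query into the corpus language so search terms can match.
    search_query = query if user_lang == corpus_lang else translate(query, user_lang, corpus_lang)
    # 2. Retrieve context from the corpus in its native language.
    context = retrieve(search_query)
    # 3. Generate in the corpus language, then translate the answer back.
    draft = generate(search_query, context)
    return draft if user_lang == corpus_lang else translate(draft, corpus_lang, user_lang)
```

The same skeleton makes the trade-off visible: two translation hops per request when the corpus exists in only one language, versus zero when locale-aware retrieval can serve each language directly.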
Multi-Agent Without the Bloat: Designing Lean AI Systems on Azure
Multi-agent systems get expensive and chaotic when every agent receives the full conversation, full tool history, and full memory state. In this session, we’ll look at how to design leaner AI systems on Azure using Azure OpenAI with Microsoft Agent Framework or LangGraph, focusing on the architectural patterns that reduce context bloat without losing capability. We’ll cover scoped context, targeted handoffs, short-term versus long-term memory, planner and router patterns, and techniques for keeping each agent focused on only the information it actually needs. Attendees will leave with practical design patterns for building multi-agent systems that are faster, cheaper, easier to debug, and easier to scale.
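Two of those patterns, a router and topic-scoped context, can be sketched without any framework at all, assuming each message carries a simple topic tag. In a real system the keyword router would be an LLM-based planner and the handoff would go through Microsoft Agent Framework or LangGraph primitives; everything below is illustrative.

```python
from dataclasses import dataclass

@dataclass
class Message:
    role: str
    content: str
    topic: str  # tag assigned at ingestion; "all" marks shared context

def scoped_context(history, topic, last_n=3):
    """Hand an agent only the messages tagged for it (plus shared ones),
    capped at the most recent last_n, instead of the full transcript."""
    relevant = [m for m in history if m.topic in (topic, "all")]
    return relevant[-last_n:]

def route(query: str) -> str:
    """Toy keyword router; a production planner would be an LLM call that
    picks the next agent and the context scope to hand it."""
    return "billing" if "invoice" in query.lower() else "general"
```

The point of the sketch is the shape, not the keyword match: each agent's prompt is built from a scoped slice, so token cost and cross-talk stop growing with the length of the overall conversation.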