Eyal Wirsansky
Staff AI Engineer | Adjunct AI Professor | Author of ‘Hands-On Genetic Algorithms with Python’ | JUG and GDG Community Leader
Jacksonville, Florida, United States
Eyal Wirsansky is a Staff AI Engineer and veteran software developer currently focused on designing agentic AI solutions for the healthcare industry. As a dedicated educator and community leader, he serves as an adjunct professor of AI at Jacksonville University and leads both the Jacksonville Java User Group and the AI for Enterprise Virtual User Group. He is also the author of "Hands-On Genetic Algorithms with Python".
Area of Expertise
Topics
The Wonderful World of Bio-Inspired Computing
Bio-inspired computing is a family of algorithms based on models of biological systems and behaviors. This talk will explore the wonders of these methods and the problems they can solve. Discover how genetic algorithms imitate the process of natural evolution to find the best solutions for given problems. Learn how genetic programming evolves computer programs to accomplish specific tasks. See how ant colony optimization mimics the way ants use pheromone trails to find efficient paths to food. Additionally, learn about particle swarm optimization, inspired by the flocking behavior of birds, in which individuals work together toward a common goal. We will also cover several frameworks and resources to help you get started.
Key takeaways:
* Understand the basic principles and concepts behind algorithms modeled after biological systems and behaviors
* Learn how genetic algorithms simulate natural evolution to identify optimal solutions for complex problems
* See real-world examples of how these bio-inspired methods can be applied across various industries to solve challenging problems
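The evolutionary loop behind genetic algorithms can be sketched in a few lines of plain Python. This is a minimal illustration on the classic OneMax toy problem (maximize the number of 1-bits in a string); the problem, operators, and parameter values are my own choices for the sketch, not taken from the talk, which uses fuller examples from the book.

```python
import random

random.seed(42)

def fitness(bits):
    # OneMax: the fitness of a bit string is its number of 1-bits.
    return sum(bits)

def evolve(n_bits=20, pop_size=40, generations=60, cx_rate=0.7, mut_rate=0.02):
    # Start from a random population of bit strings.
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # Tournament selection: the fitter of two random individuals wins.
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        next_pop = []
        while len(next_pop) < pop_size:
            child = select()[:]
            mate = select()
            if random.random() < cx_rate:
                # Single-point crossover between the two selected parents.
                point = random.randrange(1, n_bits)
                child = child[:point] + mate[point:]
            # Bit-flip mutation with a small per-bit probability.
            child = [b ^ 1 if random.random() < mut_rate else b for b in child]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

best = evolve()  # after 60 generations, best is at or near the all-ones optimum
```

The same selection-crossover-mutation loop applies unchanged to far harder problems; only the individual encoding and the fitness function need to change.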
Based on my book 'Hands-On Genetic Algorithms with Python', 2nd edition
From Prediction to Intuition: Explainable AI with Counterfactuals and Genetic Search
Counterfactual explanations — answering the question “what would need to change for a different outcome?” — are among the most powerful tools in the Explainable AI toolbox. They bridge the gap between abstract model reasoning and actionable insights. In this talk, we go beyond conventional methods and explore how genetic algorithms can evolve counterfactuals that are both realistic and actionable, offering fresh ways to understand data and model behavior.
Drawing from real-world scenarios and code examples using the German Credit Risk dataset, we’ll demonstrate how to:
* Use genetic algorithms to search for minimal, plausible input changes that flip model predictions.
* Evaluate and constrain counterfactuals for realism and interpretability.
* Detect potential model flaws and dataset biases through systematic “what-if” analysis.
Key takeaways:
* Generate counterfactual explanations with genetic algorithms to enhance transparency and trust.
* Reveal model weaknesses and dataset flaws through structured “what-if” analysis.
* Integrate counterfactual techniques into real-world AI workflows with practical Python examples.
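As a rough illustration of the search described above, here is a minimal sketch of evolving a counterfactual against a toy model. The two-feature "credit model", its threshold, and all parameter values are invented for the sketch; the talk itself works with the German Credit Risk dataset and a trained classifier.

```python
import random

random.seed(1)

def model(income, debt):
    # Toy stand-in for a trained classifier: 1 = approved, 0 = denied.
    return 1 if income - 2 * debt > 50 else 0

def find_counterfactual(income, debt, pop_size=30, generations=80):
    """Evolve a small (d_income, d_debt) change that flips the prediction."""
    original = model(income, debt)

    def cost(delta):
        di, dd = delta
        if model(income + di, debt + dd) == original:
            return float("inf")       # not a counterfactual at all
        return abs(di) + abs(dd)      # otherwise, prefer the smallest change

    # Random initial perturbations of the two features.
    pop = [(random.uniform(-100, 100), random.uniform(-100, 100))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        parents = pop[: pop_size // 2]                    # truncation selection
        children = [(di + random.gauss(0, 5), dd + random.gauss(0, 5))
                    for di, dd in parents]                # Gaussian mutation
        pop = parents + children
    return min(pop, key=cost)

# A denied applicant: what minimal change would flip the decision?
di, dd = find_counterfactual(income=100, debt=40)
```

Realism constraints (e.g., "debt cannot go negative", "age can only increase") would be added as penalty terms in the cost function, which is where the genetic approach is particularly flexible.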
Audience:
This session is ideal for data scientists, ML practitioners, and AI educators who want practical, optimization-driven tools for explaining black-box models. Whether you’re designing responsible models, auditing decisions, or teaching interpretability, you’ll leave with strategies to evolve your explanations — literally.
Level: Intermediate
Keywords: Explainable AI, Responsible AI, Counterfactuals, Genetic Algorithms, Model Interpretability, Python, Optimization
Based on my book 'Hands-On Genetic Algorithms with Python', 2nd edition
Safeguarding LLM-Powered Apps with Incoming Guardrails
LLM-powered applications and agentic workflows introduce a new kind of entry point into your system: one that behaves somewhat like an API, but is far less predictable. That shift is part of what makes AI engineering different from traditional software engineering. In addition to building application logic, we now need to account for model behavior, misuse, and risk before a request ever reaches the model.
This implementation-minded talk presents a practical approach to adding an incoming guardrails layer in front of LLM-powered applications. After a brief refresher on how LLMs work, what agents are, and how AI engineering extends familiar software engineering practices, we will walk through a reference architecture for screening requests before they hit the model.
We will cover common checks such as prompt injection, malicious intent, toxicity, and out-of-scope requests, as well as higher-risk situations like potential self-harm or medical emergencies that may require escalation rather than generation. The focus throughout is on architecture and implementation: how to think about guardrails as a real software component rather than a vague safety add-on.
Attendees will leave with a practical mental model and a simple reference design they can adapt to their own stack.
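To make the "guardrails as a software component" idea concrete, the screening layer can be modeled as a pipeline of checks that run before any model call. The specific checks and string patterns below are deliberately naive placeholders invented for this sketch; production guardrails typically rely on trained classifiers or dedicated moderation models rather than regexes.

```python
import re

# Each check returns None to pass, or an action string that short-circuits
# the pipeline so the request never reaches the model.

def check_self_harm(text):
    # High-risk content is escalated to a human, not answered by the model.
    if "hurt myself" in text.lower():
        return "escalate: route to human support"
    return None

def check_prompt_injection(text):
    # Naive pattern match, purely illustrative.
    if re.search(r"ignore (all|previous) instructions", text, re.IGNORECASE):
        return "block: possible prompt injection"
    return None

def check_scope(text):
    # Example scope rule for a hypothetical cooking assistant.
    if "recipe" not in text.lower():
        return "deflect: out of scope"
    return None

# Ordered by severity: safety checks run before scope checks.
GUARDRAILS = [check_self_harm, check_prompt_injection, check_scope]

def handle_request(text, call_model=lambda t: f"LLM answer for: {t}"):
    for check in GUARDRAILS:
        action = check(text)
        if action is not None:
            return action        # blocked, deflected, or escalated pre-model
    return call_model(text)      # only screened requests reach the model
```

The key architectural point is that the pipeline is ordinary, unit-testable code sitting in front of the model, with an explicit ordering and explicit outcomes per check.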
Who should attend:
Software engineers, architects, tech leads, and AI engineers who are building or planning LLM-powered applications, copilots, or agentic workflows and want a practical approach to safety and control.
Tags:
LLM, Generative AI, AI Engineering, Software Architecture, AI Safety, Guardrails, Agentic Workflows, Application Security
Before You Build an Agent: Practical Architecture for Production AI Applications
As interest in agentic AI grows, many teams jump too quickly from "LLM-powered feature" to "autonomous agent." The result is often unnecessary complexity, weaker control, and systems that are harder to test and maintain.
In practice, many successful AI applications begin with simpler patterns: a single LLM call, a routing step, a prompt chain, or a bounded workflow with clearly defined tool use. Agentic behavior can be powerful, but it should be introduced deliberately, where it adds clear value.
This talk presents a practical, software-engineering-focused approach to building production AI applications by starting with workflows before agents. We will look at how to choose the right level of autonomy for a problem, how to separate deterministic application logic from model-driven behavior, and how to apply orchestration patterns such as routing, chaining, and evaluation loops within a broader architecture.
We will also discuss how these patterns fit with structured outputs, guardrails, tool boundaries, and escalation paths, so that AI capabilities remain useful without becoming chaotic. The emphasis throughout is on architecture and implementation, not hype: how to build systems that are easier to reason about, safer to operate, and more maintainable over time.
Attendees will leave with a practical mental model for deciding when a workflow is enough, when an agent is justified, and how to build either one in a way that supports real production needs.
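The routing and chaining patterns mentioned above can be sketched as ordinary, testable code. The handlers below stub out the LLM calls with plain strings, and the intents and prefixes are invented for illustration; in a real system each handler would make a bounded model call, and the router itself might be a cheap classifier.

```python
# Deterministic router in front of stubbed model calls. Keeping the routing
# logic outside the model makes it easy to test and reason about.

def translate_handler(request):
    return "translation of " + request.removeprefix("translate:").strip()

def summarize_handler(request):
    # A prompt chain: each step is a separate, bounded call.
    text = request.removeprefix("summarize:").strip()
    outline = "outline of " + text       # step 1 (stubbed LLM call)
    return "summary from " + outline     # step 2 (stubbed LLM call)

def fallback_handler(request):
    return "clarification needed"

def route(request):
    # The router is plain application logic, not model behavior.
    if request.startswith("translate:"):
        return translate_handler
    if request.startswith("summarize:"):
        return summarize_handler
    return fallback_handler

def handle(request):
    return route(request)(request)
```

An agent would replace the fixed `route` function with model-driven planning; the point of the workflow-first approach is to earn that extra autonomy only when the fixed version provably falls short.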
Key takeaways
- How to decide whether a feature needs a simple LLM call, a workflow, or a more autonomous agent
- Practical orchestration patterns for production AI applications
- How to keep control, safety, and maintainability in the surrounding software architecture
Target audience
Software engineers, architects, tech leads, and AI engineers building or planning AI-powered applications.
Suggested tags:
AI engineering, agentic AI, LLMs, software architecture, production AI, workflows
Compact version
As teams rush to add "agents" to their applications, many introduce more complexity than they actually need. In practice, the best AI applications often start with simpler patterns such as a single LLM call, routing, prompt chaining, or a bounded workflow with defined tool usage.
This talk presents a practical approach to building production-ready AI applications by starting with workflows before agents. We will examine how to choose the right level of autonomy, separate deterministic application logic from model-driven behavior, and design systems that are easier to reason about, test, and operate.
Attendees will leave with a practical framework for deciding when a workflow is enough, when an agent is justified, and how to build either one in a way that fits real software systems rather than just demos.
Beyond LLMs: Hybrid AI Patterns for Real Applications
Large language models have made it easier than ever to add AI capabilities to applications, but they are not a universal solution. In production systems, asking an LLM to do everything can lead to unnecessary cost, weaker control, and architectures that are harder to test and maintain. Many real applications work better when LLMs are combined with more traditional techniques such as rules, retrieval, scoring, search, or optimization.
This talk presents a practical, engineering-focused approach to hybrid AI design. We will look at where LLMs shine, where they struggle, and how to decide which parts of a system should remain deterministic or algorithmic. Rather than treating the model as the application, we will explore how to use it as one component within a broader architecture that may also include workflow logic, retrieval pipelines, classification, ranking, or search-based methods.
The goal is not to argue against LLMs, but to show how to use them more effectively by pairing them with the right supporting techniques. Attendees will leave with a clearer framework for choosing between LLM-driven behavior and classical approaches, along with practical design patterns for building AI-powered applications that are more reliable, maintainable, and production-ready.
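A minimal sketch of this hybrid pattern: deterministic rules answer the intents they can handle exactly, and only the open-ended remainder falls through to the model (stubbed out here). The rules and intents are invented for illustration.

```python
import re

def rule_based_answer(question):
    # Exact, cheap, testable logic for well-defined intents.
    m = re.fullmatch(r"what is (\d+) \+ (\d+)\?", question.lower())
    if m:
        # Arithmetic is a classic case where a rule beats an LLM outright.
        return str(int(m.group(1)) + int(m.group(2)))
    if question.lower() == "what are your hours?":
        return "9am-5pm, Monday to Friday"
    return None  # signal: this intent is not covered by rules

def answer(question, call_llm=lambda q: f"[LLM] {q}"):
    deterministic = rule_based_answer(question)
    if deterministic is not None:
        return deterministic      # no model call: no cost, latency, or drift
    return call_llm(question)     # open-ended questions go to the LLM
```

The same dispatch shape generalizes: swap the regex rules for a retrieval lookup, a ranking model, or a search-based optimizer, and the LLM remains one component among several rather than the whole application.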
Key takeaways
- How to identify which parts of an application are a good fit for LLMs and which are better handled by classical logic or algorithms.
- Practical patterns for combining LLMs with retrieval, rules, ranking, search, and workflow orchestration.
- A more disciplined architecture mindset for building AI-powered applications in production.
Target audience
Software developers, architects, tech leads, and AI engineers who want a practical framework for applying modern AI techniques without over-engineering or over-relying on LLMs.
Suggested tags:
AI engineering, hybrid AI, LLMs, software architecture, applied AI, production systems
Compact version
Large language models are powerful, but they are not the whole solution. In production systems, many applications work better when LLMs are combined with traditional techniques such as rules, retrieval, scoring, search, or optimization rather than being asked to do everything on their own.
This talk presents a practical approach to hybrid AI design, showing how to decide which parts of a system should be LLM-driven and which should remain deterministic or algorithmic. Attendees will leave with a framework and design patterns for building AI-powered applications that are more reliable, maintainable, and production-ready.
Optional alternate titles
Hybrid AI for Real Applications: Where LLMs End and Classical Algorithms Begin
LLMs Are Not Enough: Practical Hybrid AI for Production Systems
Building Better AI Applications with LLMs, Rules, Retrieval, and Search
A Better Way to Build AI Applications: Hybrid Intelligence in Practice
Unlocking the Secrets of the Mystery-Word Game: A Journey Through NLP and Genetic Algorithms
This session introduces the basics of Natural Language Processing and word embeddings, highlighting their application in popular online games like Semantle. Discover how genetic algorithms can be used to build an intelligent player that guesses the mystery word based on semantic similarity. We will explore the game's mechanics, learn the principles of genetic algorithms, and present a live demonstration of our AI player in action. Gain insights into the broader implications and future potential of integrating these advanced technologies. Join us to explore the innovative intersection of NLP, AI, and game design.
Key takeaways:
* Gain a foundational understanding of NLP and the concept of word embeddings, with a focus on their application in semantic similarity tasks.
* Discover how genetic algorithms can be employed to create a sophisticated player for the Mystery-Word game. Understand the principles behind genetic algorithms and their optimization capabilities.
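To make the similarity-driven search concrete, here is a toy sketch. The 3-dimensional "embeddings" are invented for illustration; a real player would use pretrained word2vec vectors over a large vocabulary, and at this tiny scale the evolutionary search reduces to keep-the-best guessing driven by the same cosine-similarity fitness signal.

```python
import math
import random

random.seed(0)

# Toy 3-d "embeddings"; a real player would load pretrained word vectors.
EMBEDDINGS = {
    "cat":   [0.9, 0.1, 0.0],
    "dog":   [0.8, 0.2, 0.1],
    "tiger": [0.85, 0.05, 0.2],
    "car":   [0.0, 0.9, 0.3],
    "truck": [0.1, 0.95, 0.2],
}
SECRET = "tiger"  # the game knows this; the player only sees scores

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda x: math.sqrt(sum(c * c for c in x))
    return dot / (norm(u) * norm(v))

def score(guess):
    # The game's only feedback: semantic similarity to the secret word.
    return cosine(EMBEDDINGS[guess], EMBEDDINGS[SECRET])

def play(vocab):
    # Keep-the-best search guided by the similarity fitness; a similarity
    # at the maximum (1.0) means the mystery word has been found.
    untried = vocab[:]
    random.shuffle(untried)
    best = untried.pop()
    while untried and score(best) < 1.0:
        candidate = untried.pop()
        if score(candidate) > score(best):
            best = candidate
    return best

found = play(list(EMBEDDINGS))
```

With a realistic vocabulary of tens of thousands of words, exhaustive guessing is infeasible, which is where a genetic algorithm's population-based exploration of the embedding space becomes valuable.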
Based on my book 'Hands-On Genetic Algorithms with Python', 2nd edition