Manai Mortadha
AI/XAI Engineer @Netflix | AI Expert | XAI Researcher @Saint Mary's University | International AI Speaker
Halifax, Canada
AI Engineer | AI Expert | XAI Engineer @Netflix | XAI Researcher @Saint Mary's University | AI Consultant @Tegus & @Wivenn | Technical Reviewer @Packt | 2024 AI Apprentice @Google | International AI Speaker (LinkedIn Top Voice)
How to Decode the Brain of GenAI: Explainable AI (XAI) in Large Language Models (LLMs)
This session explores the role of Explainable AI (XAI) in enhancing transparency and trust in Large Language Models (LLMs) like ChatGPT, Bard, and Gemini. Learn how XAI techniques uncover the reasoning behind LLM decisions, detect biases, and improve user trust. Through practical examples and cutting-edge research, we will show how XAI bridges the gap between complex AI models and human understanding, enabling more ethical and reliable AI applications.
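For a concrete flavor of the kind of technique this session covers, here is a minimal, illustrative sketch (not taken from the session materials) of token-level attribution for a transformer text classifier using the SHAP library; the model name and example sentence are placeholders chosen purely for the demo, with a small sentiment model standing in for a full LLM.

```python
# Illustrative only: token-level attributions for a transformer classifier with SHAP.
# The model name and example sentence are placeholders, not session materials.
import shap
from transformers import pipeline

# A small sentiment model stands in for a full LLM to keep the demo lightweight.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    top_k=None,  # return scores for every class so SHAP can attribute each one
)

# shap.Explainer can wrap a transformers text pipeline directly.
explainer = shap.Explainer(classifier)
shap_values = explainer(["The plot was predictable, but the acting saved it."])

# Highlight which tokens pushed the prediction toward each class.
shap.plots.text(shap_values)
```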
Empowering Copilot with Explainable AI: Enhancing Transparency and Trust
As AI-powered tools like Copilot become integral to the Microsoft Power Platform, ensuring their transparency and trustworthiness is paramount. This session will explore the role of Explainable AI (XAI) in enhancing the interpretability of Copilot’s decision-making process, enabling users to better understand and trust the recommendations and actions taken by Copilot. Learn practical methods for integrating XAI into Copilot workflows to detect biases, improve user interaction, and foster ethical AI practices in real-world applications.
Empowering AI Applications with Apache Ignite: Real-Time Data Processing for Intelligent Systems
Explore how Apache Ignite's distributed in-memory computing powers real-time AI workflows. This session delves into integrating Ignite for scalable data processing, model training, and deployment. Learn to build efficient, transparent pipelines and leverage Ignite for Explainable AI (XAI) solutions.
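As a rough illustration of the integration pattern, the sketch below (assuming a local Ignite node listening on the default thin-client port 10800, and a hypothetical "user_features" cache) shows how an AI pipeline might write and read feature vectors through Ignite's Python thin client, pyignite.

```python
# Assumptions: a local Ignite node on the default thin-client port 10800,
# and a hypothetical "user_features" cache used by an inference service.
from pyignite import Client

client = Client()
client.connect("127.0.0.1", 10800)

# Create (or reuse) an in-memory cache holding per-user feature vectors.
features = client.get_or_create_cache("user_features")

# Write a feature vector computed upstream, then read it back at inference time.
features.put("user_42", [0.13, 0.87, 0.05, 0.44])
print(features.get("user_42"))

client.close()
```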
XAI Unleashed: Bringing Clarity to the Complex World of AI
As AI systems become increasingly complex and embedded in our daily lives, the need for transparency, accountability, and trust in these models has never been greater. Explainable AI (XAI) stands at the forefront of this movement, bridging the gap between advanced algorithms and human understanding.
Join us for an insightful session where we’ll unravel the mysteries of XAI, exploring cutting-edge techniques that go beyond traditional AI. From interpretability methods and model diagnostics to the ethical implications of making AI decisions transparent, this session will delve deep into the current landscape and future directions of XAI.
You’ll learn how to transform opaque AI models into understandable, responsible systems that align with human values. Whether you’re an AI professional, researcher, or just curious about the field, this session will equip you with the knowledge and tools to build AI that not only performs but also earns trust.
Key Takeaways:
• Understanding the core principles of Explainable AI and why it matters
• Advanced techniques to make complex AI models more interpretable
• The intersection of ethics, responsibility, and transparency in AI
• Real-world applications and case studies of XAI in action
• Practical insights on integrating XAI into your AI projects
Come ready to challenge the status quo, rethink AI design, and explore how we can build a future where AI serves everyone, not just a few.
Demystifying AI: Unlocking Transparency with Explainable AI (XAI)
As AI systems grow more complex and integrated into daily life, the need for transparency, accountability, and trust is crucial. Explainable AI (XAI) bridges the gap between advanced algorithms and human understanding.
Join us for an insightful session exploring the core principles of XAI, cutting-edge interpretability techniques, model diagnostics, and ethical implications. Learn how to transform opaque AI models into responsible, understandable systems aligned with human values. Whether you’re an AI professional, researcher, or enthusiast, this session equips you with practical tools to build trustworthy AI.
Key takeaways include the importance of XAI, advanced techniques for model interpretability, and the intersection of ethics, responsibility, and transparency in AI. We’ll explore real-world applications and case studies, providing insights into integrating XAI into your projects. Challenge the status quo and discover how AI can serve everyone.
Unlocking the Power of AWS Generative AI: Practical Use Cases and Future Trends
AWS Generative AI is redefining what's possible in the world of AI and cloud computing, offering unprecedented capabilities for creating innovative solutions across industries. In this session, we will explore how AWS's suite of Generative AI services is pushing the boundaries of creativity, automation, and problem-solving. From transforming customer experiences with personalized content to automating complex workflows, we'll dive into practical, real-world use cases demonstrating how AWS Generative AI is driving digital transformation. Attendees will gain insights into the latest advancements, learn best practices for implementation, and discover strategies to leverage AWS Generative AI to unlock new opportunities. Whether you're a seasoned AI professional or just starting your journey, this talk will provide valuable perspectives on how to harness the power of AWS Generative AI to stay ahead of the curve.
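As one hypothetical starting point, the snippet below sketches a single call to a generative model through the Amazon Bedrock runtime with boto3; the region, model ID, prompt, and inference settings are placeholders, and it assumes your AWS account has access to the chosen model.

```python
# Hypothetical example: region, model ID, prompt, and inference settings are
# placeholders; the account must have access to the chosen Bedrock model.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="amazon.titan-text-express-v1",  # placeholder; any enabled model ID works
    messages=[
        {"role": "user", "content": [{"text": "Draft a friendly product-update email."}]}
    ],
    inferenceConfig={"maxTokens": 256, "temperature": 0.5},
)

print(response["output"]["message"]["content"][0]["text"])
```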
AIOps: The Future of Automated IT Operations
In an era of increasing complexity and scale in IT environments, AIOps (Artificial Intelligence for IT Operations) is emerging as a game-changer. This session will explore how AIOps leverages machine learning and big data to automate and enhance IT operations, from incident detection to root cause analysis and remediation. We'll discuss the challenges of integrating AIOps into existing infrastructures and provide strategies for overcoming common obstacles. Attendees will learn how to implement AIOps to reduce downtime, improve system performance, and enable predictive maintenance, ultimately leading to more resilient and efficient IT ecosystems. Whether you're an IT professional, a data scientist, or a business leader, this session will offer valuable insights into harnessing the power of AIOps to drive innovation and operational excellence.
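To make one AIOps building block tangible, here is a minimal sketch (not the session's code) of unsupervised anomaly detection over an operational metric using scikit-learn's IsolationForest; the latency values are synthetic placeholders.

```python
# Synthetic example: unsupervised anomaly detection over a latency metric.
import numpy as np
from sklearn.ensemble import IsolationForest

# Fake a latency series (milliseconds) and inject an incident-like spike.
rng = np.random.default_rng(seed=0)
latency = rng.normal(loc=120, scale=10, size=500)
latency[480:] += 200

model = IsolationForest(contamination=0.05, random_state=0)
labels = model.fit_predict(latency.reshape(-1, 1))  # -1 = anomaly, 1 = normal

anomalies = np.where(labels == -1)[0]
print(f"Flagged {len(anomalies)} suspicious samples, e.g. indices {anomalies[:5]}")
```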
Cracking the Black Box: An Introduction to XAI"
Black Box: An Introduction to XAI." In an era where AI is revolutionizing industries and decision-making processes, understanding how AI models arrive at their conclusions is more crucial than ever. Enter Explainable Artificial Intelligence (XAI), the key to transparency and accountability in AI systems.
During this introductory session, we will guide you through the fundamental concepts and methodologies that make XAI an essential tool in the AI toolbox. You'll learn why transparency in AI is essential for responsible AI adoption and how XAI tools and techniques can unravel the black box, making AI models understandable and trustworthy.
We'll delve into real-world applications, demonstrating how XAI is reshaping industries like healthcare, finance, and autonomous vehicles, and how it impacts everyday life. Discover how XAI builds trust and accountability, benefiting both users and developers, and explore future directions in this ever-evolving field.
Whether you're a data scientist, AI enthusiast, business leader, student, or simply curious about the inner workings of AI, this session will provide you with essential insights. Join us as we embark on a journey to demystify AI and gain a deeper understanding of Explainable Artificial Intelligence. Together, we'll "Crack the Black Box" and make AI accessible, transparent, and responsible for all.
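For readers who want to try one beginner-friendly XAI technique right away, the sketch below (generic placeholder data, not tied to the session) uses scikit-learn's permutation importance to show how much each input feature matters to a trained "black box" model.

```python
# Generic illustration on a public dataset; not tied to any session materials.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an opaque model, then ask which features its accuracy depends on.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```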
Azure AI Connect (upcoming)
DevFest Bujumbura 2024
JCONF PERU 2024
Tunisia Dev Days 2024