Speaker

Zion Pibowei

Head of Data Science & AI, Periculum

Lagos, Nigeria

Zion Pibowei is a seasoned data science and AI technical leader with rich experience building commercial data and AI products, executing data and AI strategies, and creating value streams across product delivery lifecycles. His 10 years of experience in data and AI span advanced analytics, data science and engineering, machine learning system design and solution delivery, and generative AI implementation, with proven success executing high-value initiatives.

He currently leads data science, machine learning and AI initiatives at Periculum and manages a talented team of data scientists and ML engineers, doubling as both people manager and hands-on technical lead. His recent focus has been on AI engineering for low-resource multilingual conversational systems, the implementation and deployment of high-fidelity ML infrastructure for real-time fraud detection, and deep learning and computer vision for document understanding and verification.

Zion is passionate about open source and keen on building and deploying AI systems using lightweight, minimal technology stacks. He launched the AI Summer of Code in summer 2024, bringing together an impressive international faculty lineup to teach LLM and AI engineering best practices to over 400 enthusiastic learners (https://github.com/zion-king/ai-summer-of-code). He is a prolific speaker who enjoys sharing his knowledge and insights on the practical implementation of AI systems.

Area of Expertise

  • Information & Communications Technology

Topics

  • Data Science & AI
  • Big Data
  • Machine Learning
  • Applied Machine Learning
  • Technology Strategy
  • Enterprise Analytics
  • MLOps
  • Cloud & DevOps
  • Decision Science
  • Quantitative Decision Analysis

MLOps for Mission-Critical Applications: Lessons from Building and Scaling Production ML Systems

Building production machine learning systems is hard. Even more challenging is building a scalable infrastructure for the iterative development, deployment and management of machine learning models. A simple ML system typically consists of a model or two, a development pipeline with all the data transformations, a deployment pipeline, and a VM or server for hosting and running inference. Such a basic system is usually designed not for scale but as a proof of concept or MVP to prove a business case.

Once a business case is established and there's executive buy-in, it becomes crucial to build and operationalize a more efficient and scalable ML system. This is especially important for mission-critical applications, such as personalisation (recommendation, ad targeting) or risk and compliance (verification, fraud detection), where system fidelity and fault tolerance are critical for product success. Building such ML systems for scale requires an end-to-end process called Machine Learning Operations (MLOps).

In this session, we will explore the key components of an MLOps infrastructure, from tooling to best practices. We will begin by examining how MLOps shares lineage with DevOps yet is fundamentally different. Then we will discuss why ML system design is a crucial first step in successful MLOps implementation. We will cover the key elements of an ML system, including development, experiment tracking, model registry, feature store, orchestration, versioning, and deployment.
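To make two of these elements concrete, here is a toy, dependency-free sketch of what an experiment tracker and a model registry do conceptually. All class and field names are hypothetical illustrations, not any particular tool's API; real platforms add persistence, artifact storage, and access control on top of this idea.

```python
import time
import uuid


class ExperimentTracker:
    """Toy experiment tracker: records params and metrics per run."""

    def __init__(self):
        self.runs = {}

    def start_run(self, params):
        run_id = uuid.uuid4().hex[:8]
        self.runs[run_id] = {"params": params, "metrics": {}, "ts": time.time()}
        return run_id

    def log_metric(self, run_id, name, value):
        self.runs[run_id]["metrics"][name] = value


class ModelRegistry:
    """Toy model registry: versioned model entries with stage transitions."""

    def __init__(self):
        self.versions = []

    def register(self, run_id, artifact_uri):
        version = len(self.versions) + 1
        self.versions.append({"version": version, "run_id": run_id,
                              "artifact": artifact_uri, "stage": "staging"})
        return version

    def promote(self, version, stage="production"):
        self.versions[version - 1]["stage"] = stage


# A run's lineage (params -> metrics -> registered artifact) stays traceable:
tracker = ExperimentTracker()
run = tracker.start_run({"lr": 0.01, "depth": 6})
tracker.log_metric(run, "auc", 0.91)

registry = ModelRegistry()
v = registry.register(run, f"s3://models/{run}/model.pkl")
registry.promote(v)
print(registry.versions[0]["stage"])  # production
```

The point of the sketch is the linkage: every registered model version points back to the run that produced it, which is what makes rollback and auditability possible at scale.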

We will also talk about the 20% of MLOps that drives scalability for the 80% ML system. This includes deployment strategies, online and offline evaluation, online experimentation, observability, and continuous retraining. We will discuss the practical challenges of implementing these MLOps components, motivated by lessons from building ML systems for high-risk applications such as consumer lending, compliance, and fraud detection. Emphasis will be placed on the importance of long-term execution in MLOps, avoiding the pitfalls of short-term solutions, and setting milestones to achieve visible short-term wins while scaling sustainably.
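One deployment strategy in this family, a canary release, can be sketched as deterministic hash-based traffic splitting. This is a minimal stdlib-only illustration under assumed requirements (sticky per-user assignment, a configurable canary fraction); production routers also handle rollback triggers and metric guardrails.

```python
import hashlib


def assign_variant(user_id: str, canary_fraction: float = 0.1) -> str:
    """Deterministically route a user to 'canary' or 'stable' by hashing their id."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 10_000
    return "canary" if bucket < canary_fraction * 10_000 else "stable"


# The same user always lands in the same bucket, so sessions are sticky,
# and roughly the requested fraction of traffic reaches the new model.
routes = [assign_variant(f"user-{i}") for i in range(10_000)]
canary_share = routes.count("canary") / len(routes)
print(f"canary share: {canary_share:.2%}")
```

Hashing rather than random sampling is the design choice that matters: it keeps assignment reproducible across services without shared state, which also makes later online evaluation of canary vs. stable cohorts straightforward.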

This session is targeted at data scientists, data engineers, analytics engineers, ML(Ops) engineers, backend/DevOps engineers, engineering managers and data leaders. The only prerequisite for attending is a basic understanding of machine learning. By the end of this session, participants will be better equipped to approach model development and deployment through the lens of ML systems. They will gain an intuitive understanding of ML system design and implementation, and of the fundamentals of setting up and executing MLOps roadmaps for mission-critical applications.

Enterprise Knowledge Discovery in the Age of Generative AI

In the last 2 years, we have witnessed explosive growth in generative AI adoption, driven by the early success of ChatGPT. Large language models (LLMs) suddenly became a topic of public interest, nearly 5 years after the introduction of the transformer architecture, the bedrock of modern language models. This wave has compelled enterprises to rethink their business strategy, as many aim to integrate AI into their operations and extract the most value from it.

The early use cases of LLMs centred on chat interfaces, though these applications were prone to hallucination. To address this problem, two key solutions became popular: fine-tuning and retrieval-augmented generation (RAG). RAG proved handier, as it was not only cost-effective but also allowed LLMs to augment their knowledge with additional context data at inference time, making it easy to fact-check answers and evaluate key metrics like context relevance, faithfulness, and generation accuracy. This enabled Talk to Your Document use cases, supporting both internal search-and-discovery needs and customer-facing virtual assistant interfaces.
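The retrieve-then-augment loop described above can be sketched in a few lines. This toy version substitutes naive word overlap for real embedding similarity and stops short of the actual LLM call; the document snippets and function names are hypothetical.

```python
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query (stand-in for
    embedding similarity in a real RAG pipeline)."""
    q = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]


def build_prompt(query: str, documents: list[str]) -> str:
    """Augment the user question with retrieved context before calling an LLM."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"


docs = [
    "Refunds are processed within 5 business days.",
    "Our office is open Monday to Friday.",
    "Refund requests require the original receipt.",
]
print(build_prompt("How long do refunds take?", docs))
```

Because the model is instructed to answer only from retrieved context, its claims can be checked against that context, which is exactly what makes metrics like faithfulness and context relevance measurable.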

The potential of generative AI goes beyond talking to documents: it can also be a lever for business agility and data-driven transformation. Most enterprises want to do more with generative AI - to integrate it across all business processes and to interact seamlessly with their data, no matter what or where it is. In this session, we will explore how generative AI drives these opportunities. Starting with the current state of generative analytics and conversational business intelligence, we will "delve" into practical ideas and emerging trends such as LLM-enriched semantic layers, intelligent metric generation, RAG over semantic layers, RAG over databases, function calling, and agentic workflows.
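Function calling, one of the trends listed above, boils down to the model emitting a structured request that application code parses and dispatches. The sketch below simulates that loop with a hard-coded tool registry and a canned model output; `get_revenue` and its figures are hypothetical stand-ins for a real database query.

```python
import json


def get_revenue(quarter: str) -> float:
    """Toy data function standing in for a real database or semantic-layer query."""
    return {"Q1": 1.2e6, "Q2": 1.5e6}.get(quarter, 0.0)


# Registry mapping tool names the model may request to actual callables.
TOOLS = {"get_revenue": get_revenue}


def dispatch(model_output: str) -> str:
    """Parse a model-emitted tool call and execute the matching function."""
    call = json.loads(model_output)
    result = TOOLS[call["name"]](**call["arguments"])
    return json.dumps({"tool": call["name"], "result": result})


# Simulated model output requesting a tool call:
print(dispatch('{"name": "get_revenue", "arguments": {"quarter": "Q2"}}'))
```

Agentic workflows extend this same loop: the tool result is fed back to the model, which decides whether to answer or issue another call.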

Participants will discover how generative AI enables analysts to deliver data requests faster, speeding up time-to-insights for decision makers. They will also learn how advances with RAG and Agents improve upon traditional NLQ and search-based BI tools, fostering interactive experiences for both analysts and non-technical users. The session will advance into novel ideas for going beyond interactive analytics to seamless knowledge discovery at scale.

This session is designed for participants across all levels, technical or non-technical, and will be especially beneficial for enterprise leaders and non-technical stakeholders. Attendees will gain understanding of the state-of-the-art in applied generative AI and insights into the future of enterprise knowledge discovery.

Experimentation: Recipes for Scaling Commercial Data Products

Data products and machine learning solutions rarely deliver as much value as expected when they go live. Beyond the production issues that degrade model quality after deployment, a whole range of external factors can prevent a model from improving product success. Many companies that do not understand this focus mostly on algorithm/model development and model operationalization, without a clear way to track how these solutions impact product success and revenue generation. In this session, we will look into how experimentation addresses this problem and eliminates severe bottlenecks for data and product teams.

Experimentation is a powerful way of iterating over multiple variations of new models, products or features, and testing their impact in live environments. It enables an organisation to identify the interests and behaviours of various segments of its userbase and serve them accordingly.

This talk is targeted at data science practitioners, product managers and senior leaders alike, and will be delivered around three anchors: experimental design, experimental execution and experimental analysis. By the end of the session, participants will have learned about the significance of experimental design, and how to develop testable hypotheses, clear problem statements and clear outcome KPIs. They will also understand key processes in experimental execution, from platformization and experimentation ownership to deciding when to use A/B tests or quasi-experimental methods. The crux of the session will be experimental analysis, with a focus on driving business value with analytical methods, supported by use cases from top companies championing this field.

My aim is that, by the end of this talk, data practitioners and leaders will return to their organisations with a better understanding of how to implement experimentation initiatives and massively improve their products, services and business growth potential.
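As a taste of the experimental-analysis anchor, here is a minimal two-proportion z-test for comparing conversion rates between an A/B test's control and treatment arms, using only the standard library. The sample counts are invented for illustration; real analyses also check power, guardrail metrics, and multiple-testing corrections.

```python
from math import erf, sqrt


def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Test whether treatment (B) conversion differs from control (A)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value


# Hypothetical experiment: 10% vs 12% conversion on 5,000 users per arm.
z, p = two_proportion_z_test(500, 5000, 600, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

The testable-hypothesis discipline from the design phase shows up here: the KPI (conversion), the comparison, and the decision threshold are all fixed before the analysis runs.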

DatafestAfrica 2024 Sessionize Event

October 2024 Lagos, Nigeria

DataFestAfrica Sessionize Event

October 2022 Lagos, Nigeria
