Speaker

Brackly Murunga

Portfolio Data Scientist @ M-KOPA

Nairobi, Kenya

Brackly Murunga is a machine learning researcher with a professional background in applied data science and machine learning. His career spans half a decade across multiple industries, including tech, FMCG, and fintech.

He was the Lead AI/ML Engineer at Phindor LTD before joining BAT Kenya as a data scientist, and is currently a Portfolio Data Scientist at M-KOPA.

Area of Expertise

  • Finance & Banking
  • Information & Communications Technology
  • Manufacturing & Industrial Materials

Topics

  • Artificial Intelligence (AI) and Machine Learning
  • Data Science

Accountability in AI: Who’s Responsible When Machines Decide?

As AI systems shape critical decisions in areas like hiring and healthcare, ensuring accountability is crucial. This session will focus on building AI with strong guardrails to mitigate bias, manage ethical risks, and promote transparency. Attendees will explore frameworks that ensure responsibility throughout the AI lifecycle, from design to deployment, and learn strategies for identifying biases and managing risks. Whether you're a seasoned developer, ML practitioner, or novice, you'll gain practical insights on creating fair, ethical, and accountable AI systems for real-world applications.
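
One of the bias-identification strategies the session alludes to can be sketched as a simple decision-level probe. The function below computes a demographic parity gap with NumPy; the hiring-model predictions and group labels are hypothetical, and real audits would use several complementary metrics.

```python
import numpy as np

def demographic_parity_gap(preds, groups):
    # Absolute difference in positive-prediction rates between two groups.
    # Values near 0 suggest similar treatment at the decision level;
    # large values flag a disparity worth investigating.
    preds = np.asarray(preds)
    groups = np.asarray(groups)
    rate_a = preds[groups == 0].mean()
    rate_b = preds[groups == 1].mean()
    return abs(rate_a - rate_b)

# Hypothetical hiring-model outputs (1 = shortlisted) for two groups
preds  = [1, 0, 1, 1, 0, 0, 0, 1]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
gap = demographic_parity_gap(preds, groups)  # 0.75 - 0.25 = 0.5
```

A gap this large (0.5) would be a red flag; such a check belongs in the evaluation stage of the AI lifecycle, before deployment.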

Scaling LLM Fine-Tuning on AWS with Dask

Fine-tuning large language models (LLMs) is exciting—until your checkpoints hit gigabytes and datasets scale into the billions of tokens. How do you manage memory, scale training across machines, and fine-tune multiple models—without breaking your workflow or your budget?

The need for specialized LLMs is rapidly growing, especially in Africa, where local languages, cultural nuance, and domain-specific data require models that go beyond generic pre-trained baselines.

This session walks through how to scale LLM fine-tuning using Dask with AWS EC2 and ECS. You'll learn how to spin up clusters, orchestrate parallel jobs, and manage massive datasets efficiently using familiar Python tools.

Expect real-world examples of:

  • Running multiple fine-tuning jobs in parallel
  • Managing large tokenized datasets on S3
  • Optimizing compute cost using spot instances

Whether you’re building copilots or custom LLMs, you’ll leave ready to scale from notebook to cloud cluster—without rewriting your codebase.
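
The fan-out pattern described above can be sketched with `dask.distributed`. Here `fine_tune`, the model names, and the S3 paths are placeholders; in the talk's setup the `Client` would point at an EC2/ECS-backed scheduler rather than an in-process cluster.

```python
from dask.distributed import Client

def fine_tune(model_name, dataset_path, lr):
    # Stand-in for a real training loop (e.g. a transformers Trainer run);
    # returns a small summary dict so results can be gathered on the driver.
    return {"model": model_name, "data": dataset_path, "lr": lr, "status": "done"}

# In production: Client("tcp://<scheduler-on-aws>:8786")
client = Client(processes=False)  # in-process cluster, for illustration only

jobs = [
    ("base-model", "s3://bucket/swahili-corpus", 2e-5),
    ("base-model", "s3://bucket/legal-corpus", 1e-5),
]
# Submit every job at once; Dask schedules them across available workers.
futures = [client.submit(fine_tune, m, d, lr) for m, d, lr in jobs]
results = client.gather(futures)  # blocks until all jobs finish
client.close()
```

The same submit/gather loop scales unchanged from a laptop to a multi-node cluster, which is the "notebook to cloud cluster without rewriting" promise in practice.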

Turbocharge Your Docs: Leveraging GenAI for Rapid Documentation Prototyping

Good documentation is a pleasure to consume but rarely fun to write, as it takes time and effort away from the code itself. However, the explosion of now-ubiquitous generative AI tools gives developers a powerful background assistant that can automatically generate initial documentation for their code.

In this talk we will explore how generative AI in Visual Studio Code can be the secret ingredient for producing high-quality documentation quickly. This session will highlight:

Speed Meets Quality: Learn how AI can help create initial drafts and refine technical content, ensuring rapid output without compromising clarity.

Offline Capabilities: Discover how leveraging local language models provides robust, offline generative functionalities, offering flexibility and enhanced control.
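
The offline pattern can be sketched in two steps: build a documentation prompt locally, then send it to a locally hosted model. The endpoint URL, model name, and payload shape below follow an Ollama-style HTTP API and are assumptions, not a fixed spec.

```python
import json
import urllib.request

def build_doc_prompt(source_code: str) -> str:
    # Wrap a code snippet in an instruction asking for a first-draft docstring.
    return (
        "Write a concise docstring for the following function. "
        "Describe parameters, the return value, and one usage example.\n\n"
        f"{source_code}"
    )

def generate_docs(source_code: str,
                  url: str = "http://localhost:11434/api/generate") -> str:
    # Requires a local model server to be running: offline, but not zero-setup.
    payload = json.dumps({
        "model": "codellama",            # placeholder local model
        "prompt": build_doc_prompt(source_code),
        "stream": False,
    }).encode()
    req = urllib.request.Request(url, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

prompt = build_doc_prompt("def add(a, b):\n    return a + b")
```

Because nothing leaves the machine, this setup suits codebases that cannot be sent to a hosted API.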

Best Practices & Pitfalls: Gain insights from early adopters on what works—and what doesn’t—in the realm of AI-driven documentation.

Attendees will walk away with practical strategies to incorporate these AI tools into their documentation workflows, streamlining the prototyping process while maintaining comprehensive, user-friendly, and accurate documentation.

Kubeflow Unleashed: Harnessing Open-Source MLOps for Scalable, Cost-Effective End-to-End AI Pipelines

Africa faces unique challenges in adopting DevOps, especially in the wake of machine learning and AI: limited budgets, infrastructure constraints, and the need for secure, scalable solutions. Kubeflow offers a game-changing answer by providing an open-source MLOps platform that reduces costs, simplifies operations, and scales effortlessly with Kubernetes. It empowers African organizations to automate AI workflows, secure sensitive data, and deploy models at scale, all without expensive proprietary tools.

This session will explore how Kubeflow tackles these challenges, unlocking the potential for African innovators to build and manage AI pipelines that meet local needs and global standards, affordably and efficiently. Whether you are a seasoned ML practitioner, a DevOps guru, or a student looking to learn, this session will unpack Kubeflow from A to Z to show you its potential and use cases.
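
As a taste of the declarative workflow automation described above, here is a minimal `PyTorchJob` manifest for the Kubeflow Training Operator. The image, replica counts, and resource requests are illustrative placeholders, not a recommended configuration.

```yaml
# Minimal distributed-training job for the Kubeflow Training Operator.
apiVersion: kubeflow.org/v1
kind: PyTorchJob
metadata:
  name: demo-finetune
spec:
  pytorchReplicaSpecs:
    Master:
      replicas: 1
      restartPolicy: OnFailure
      template:
        spec:
          containers:
            - name: pytorch
              image: registry.example.com/train:latest  # placeholder image
              resources:
                limits:
                  nvidia.com/gpu: 1
    Worker:
      replicas: 2          # scale out by editing one number
      restartPolicy: OnFailure
      template:
        spec:
          containers:
            - name: pytorch
              image: registry.example.com/train:latest
```

Applying this with `kubectl apply -f` hands scheduling, restarts, and scaling to Kubernetes, which is what keeps the operational cost low.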

Privacy-Preserving Framework for Collaborative Machine Learning on Sensitive Data

Deep learning algorithms are data-hungry: the more data they have, the better they generalize to unseen data. While considerable effort has gone into gathering and publishing huge datasets of non-sensitive data to this end, the same cannot be done for sensitive data, for obvious reasons. This has left the development of large, robust models in domains with sensitive data, such as finance and healthcare, limited to large organizations with lots of data.

The alternative, sharing data across healthcare and financial practitioners, could help produce capable models thanks to the variety of rich data they collectively possess; however, it raises data security and privacy concerns.

In my session I intend to showcase a framework for sharing sensitive information across organizations to collaboratively train a single deep learning model in a privacy-preserving way, using autoencoding and differential privacy.
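
The sharing step of such a framework can be sketched as: each organization encodes its records into a latent space, then adds calibrated Gaussian noise before anything leaves its walls. The linear "encoder" below stands in for a trained autoencoder, and the noise scale follows the Gaussian mechanism; both are simplifications of the full framework.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(X, W):
    # Stand-in for a trained autoencoder's encoder: a linear projection
    # from the raw feature space into a lower-dimensional latent space.
    return X @ W

def privatize(Z, sensitivity, epsilon, delta):
    # Gaussian mechanism: sigma = sensitivity * sqrt(2 ln(1.25/delta)) / epsilon.
    # Noise is added locally, so only the noised latents are ever shared.
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return Z + rng.normal(0.0, sigma, size=Z.shape)

X = rng.normal(size=(100, 8))                 # one org's sensitive records
W = rng.normal(size=(8, 3)) / np.sqrt(8)      # encoder weights (placeholder)
Z_shared = privatize(encode(X, W), sensitivity=1.0, epsilon=1.0, delta=1e-5)
```

Collaborators then train the shared model on the pooled noised latents, so no raw record ever crosses an organizational boundary.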
