Prachi Kedar

AI/ML Engineer | Computer Vision & Generative AI Enthusiast

Brussels, Belgium

I’m an AI/ML engineer and researcher who loves turning complex ideas into practical, real-world solutions, and who thrives on building smart, scalable systems, from intelligent automation to advanced AI modeling pipelines. I don’t just experiment with models; I deploy them, optimize them, and make them work where it matters. Over the past four years, I’ve worked across industries including retail automation, aviation, and neuroscience, building systems that blend innovation with impact. I’m passionate about computer vision, generative AI, and making cutting-edge tech accessible to developers everywhere.

Area of Expertise

  • Information & Communications Technology
  • Manufacturing & Industrial Materials
  • Physical & Life Sciences
  • Transports & Logistics

Topics

  • Computer Vision
  • Machine Learning
  • Generative AI
  • NLP
  • Deep Learning
  • Data Science
  • Python

The Ghost in the Machine: Orchestrating the Go Netpoller for High-Performance I/O

Every Go developer takes for granted that they can spin up 100,000 goroutines to handle concurrent network requests without crashing the operating system. But beneath the surface of a simple net.Listen, a complex dance is happening between the Go Runtime, the Scheduler, and the OS kernel. This session pulls back the curtain on the Netpoller, the silent engine that transforms blocking, synchronous-looking Go code into non-blocking, asynchronous system calls. We will journey through the G-M-P scheduling model to see exactly what happens when a goroutine "parks" while waiting for data, and how the Netpoller utilizes epoll, kqueue, or io_uring to wake it back up with surgical precision. By understanding the interaction between memory management and network I/O, you will walk away not just with a deeper appreciation for Go’s internals, but with practical insights on how to profile and optimize high-throughput systems where every microsecond of latency counts.
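The synchronous-looking style the abstract describes can be seen in a few lines. The sketch below (the `roundTrip` helper and its message are invented for illustration) starts a throwaway echo server and dials it; each `Read` looks blocking, but under the hood the runtime parks the goroutine and hands the socket's file descriptor to the netpoller, so no OS thread is pinned while waiting for bytes:

```go
package main

import (
	"bufio"
	"fmt"
	"net"
)

// roundTrip starts a throwaway echo server, dials it, and returns the
// echoed line. Every accepted connection is served by its own goroutine;
// the reads below *look* blocking, but the runtime parks the goroutine
// and registers the fd with the netpoller (epoll/kqueue) until data arrives.
func roundTrip(msg string) string {
	ln, err := net.Listen("tcp", "127.0.0.1:0") // port 0: kernel picks a free port
	if err != nil {
		panic(err)
	}
	defer ln.Close()

	go func() {
		for {
			conn, err := ln.Accept()
			if err != nil {
				return // listener closed
			}
			go func(c net.Conn) { // one goroutine per connection
				defer c.Close()
				line, err := bufio.NewReader(c).ReadString('\n')
				if err != nil {
					return
				}
				c.Write([]byte(line)) // echo it back
			}(conn)
		}
	}()

	conn, err := net.Dial("tcp", ln.Addr().String())
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	fmt.Fprint(conn, msg)
	reply, _ := bufio.NewReader(conn).ReadString('\n')
	return reply
}

func main() {
	fmt.Print(roundTrip("hello netpoller\n"))
}
```

The same goroutine-per-connection pattern scales to the 100,000-goroutine figure above precisely because a parked goroutine costs a few kilobytes of stack, not an OS thread.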

TinyLLMs on the Edge: Running Compressed Language Models on Your Phone

In 2025, AI is breaking free from the cloud. With the rise of model compression, quantization, and optimized runtimes, we can now run compact Large Language Models—known as TinyLLMs—directly on mobile devices, laptops, and even low-power embedded hardware. This shift is changing how we think about AI applications, making them faster, more private, and more accessible to everyone.

In this lightning talk, we’ll explore the exciting new possibilities of running LLMs on the edge. We’ll cover the frameworks and toolchains that make this possible today, including ONNX Runtime Mobile, TensorFlow Lite, and Apple’s MLX, and discuss how developers can deploy sub-300M parameter models for real-world use cases. From offline summarization and chat assistants to real-time text classification and personal productivity tools, TinyLLMs open up use cases that no longer require constant connectivity or expensive cloud infrastructure.

We’ll also look at key challenges such as memory constraints, model quantization, and trade-offs between accuracy and efficiency—and discuss where the future of edge-based AI is heading.
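To make the quantization trade-off concrete, here is a minimal sketch of symmetric per-tensor int8 quantization, the simplest form of the post-training quantization these runtimes apply (function names and the toy weights are invented for illustration; production toolchains typically quantize per-layer or per-channel):

```go
package main

import (
	"fmt"
	"math"
	"math/rand"
)

// quantizeInt8 performs symmetric per-tensor int8 quantization: the largest
// |w| maps to 127, and every weight is rounded to the nearest multiple of
// the resulting scale. Memory drops 4x (float32 -> int8).
func quantizeInt8(w []float32) (q []int8, scale float32) {
	var maxAbs float64
	for _, v := range w {
		if a := math.Abs(float64(v)); a > maxAbs {
			maxAbs = a
		}
	}
	scale = float32(maxAbs / 127.0)
	q = make([]int8, len(w))
	for i, v := range w {
		q[i] = int8(math.Round(float64(v) / float64(scale)))
	}
	return q, scale
}

// dequantize recovers approximate float32 weights from the int8 codes.
func dequantize(q []int8, scale float32) []float32 {
	w := make([]float32, len(q))
	for i, v := range q {
		w[i] = float32(v) * scale
	}
	return w
}

// maxAbsError reports the worst-case reconstruction error; for symmetric
// rounding it is bounded by scale/2.
func maxAbsError(a, b []float32) float32 {
	var m float64
	for i := range a {
		if d := math.Abs(float64(a[i] - b[i])); d > m {
			m = d
		}
	}
	return float32(m)
}

func main() {
	rng := rand.New(rand.NewSource(1))
	w := make([]float32, 1000)
	for i := range w {
		w[i] = float32(rng.NormFloat64())
	}
	q, scale := quantizeInt8(w)
	err := maxAbsError(w, dequantize(q, scale))
	fmt.Printf("4 KB of float32 -> 1 KB of int8, max abs error %.4f\n", err)
}
```

The accuracy cost is bounded by half the scale per weight, which is why outlier weights (which inflate the scale) are a central concern in LLM quantization schemes.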

By the end of the session, attendees will walk away with:

  • A clear understanding of why TinyLLMs matter in 2025
  • A practical roadmap for experimenting with on-device AI
  • Inspiration to build privacy-first, low-latency applications that fit in the palm of your hand

If you’ve ever wanted to shrink an LLM to fit in your pocket—this talk is for you.

Developer DNA in the GenAI Era

Generative AI has fundamentally changed how we design, build, and interact with software. But with this rapid evolution, one key question remains: what skills do developers actually need to thrive in the GenAI era?

This lightning talk will explore the new “developer DNA” that goes beyond traditional coding. We’ll look at the core technical skills shaping AI development in 2025, including prompt engineering, model fine-tuning, vector databases, TinyML, and GenAIOps (MLOps for LLMs). Alongside these, we’ll discuss essential cross-cutting practices—such as responsible AI, multi-cloud strategies, and human-AI collaboration—that are becoming critical for modern software teams.
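As a taste of what the vector-database skill boils down to, here is a toy in-memory nearest-neighbour lookup by cosine similarity (the three-dimensional "embeddings" and document labels are invented for illustration; real embeddings have hundreds of dimensions and come from a model):

```go
package main

import (
	"fmt"
	"math"
)

// cosine returns the cosine similarity between two equal-length vectors.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

// nearest returns the index of the stored vector most similar to the query,
// by brute-force scan; real vector databases replace this loop with an
// approximate index (e.g. HNSW) to stay fast at millions of vectors.
func nearest(query []float64, store [][]float64) int {
	best, bestSim := -1, math.Inf(-1)
	for i, v := range store {
		if s := cosine(query, v); s > bestSim {
			best, bestSim = i, s
		}
	}
	return best
}

func main() {
	// Pretend embeddings for three documents (normally produced by a model).
	store := [][]float64{
		{0.9, 0.1, 0.0}, // "gophers"
		{0.1, 0.9, 0.1}, // "transformers"
		{0.0, 0.2, 0.9}, // "espresso"
	}
	query := []float64{0.2, 0.8, 0.0}
	fmt.Println("closest document index:", nearest(query, store)) // prints 1
}
```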

Drawing on experiences across industry, research, and freelancing, I’ll highlight practical insights on how developers can adapt their workflows, expand their skillsets, and future-proof their careers in this rapidly shifting landscape.

Attendees will leave with:

  • A clear skills roadmap for 2025 and beyond
  • Practical tips for integrating GenAI tools into everyday development
  • Inspiration to see AI not as a disruption, but as a career accelerator

Whether you’re a beginner entering the AI space or an experienced developer navigating new trends, this session will help you build the mindset and toolkit to succeed in the era of GenAI.

GopherCon Europe 2026 in Berlin (upcoming)

June 2026, Berlin, Germany

DevFest Berlin 2025

November 2025, Berlin, Germany
