Speaker

Dinoy Raj

Product Engineer – Android @ Strollby | Droidcon Uganda ’25 & Droidcon Abu Dhabi ’25 Speaker

Thiruvananthapuram, India

Product-focused Android developer and Droidcon speaker with strong expertise in Jetpack Compose, Kotlin, and on-device AI. I’ve built scalable, high-performance mobile apps with robust modular structures, efficient CI/CD pipelines, and delightful user experiences.

Currently building next-gen travel experiences at Strollby, where I lead Android development across hotel, flight, and activity bookings. My work spans multi-module architecture, Apollo GraphQL, payment integrations, and deep UI performance optimisations.

I’m currently exploring on-device AI, especially MediaPipe LLM inference and the Prompt API.

Beyond work, I built and published my own minimalist Android launcher, Simple Launcher - Minimalist, applying Compose, design systems, and automated release workflows to real-world product challenges.

I’m also speaking at Droidcon Abu Dhabi ’25, DevFest Dubai ’25, and DevFest Mumbai ’25, where I’ll be sharing insights on building AI-enhanced Android experiences with Jetpack Compose and on-device intelligence.
I’ve previously spoken at Droidcon Uganda as well.

Area of Expertise

  • Information & Communications Technology

Topics

  • Android
  • Jetpack Compose
  • Kotlin
  • Developing Android Apps
  • Android Developer
  • Kotlin Multiplatform
  • AI/ML
  • UI Testing
  • Gradle
  • AI
  • Gemini Nano

Build Your Pocket Brain: Custom On-Device LLMs via MediaPipe on Android

MediaPipe's new LLM Inference API is leading the charge for Android developers. Beyond simply running pre-trained models like Gemma on a device, we will dive deep into the power of customisation using Low-Rank Adaptation (LoRA), a parameter-efficient fine-tuning (PEFT) technique.

You'll learn the end-to-end workflow for creating a specialised, on-device LLM tailored to your app's unique domain. We will cover how to take a base model (like Gemma or Phi-2), fine-tune it with your own dataset using the PEFT library in Python, convert both the base model and the LoRA weights into the MediaPipe-compatible FlatBuffer format, and finally, integrate this custom-tuned model into an Android application.

We will demonstrate how to configure LlmInferenceOptions in Kotlin to load both the base model and the .tflite LoRA file, unlocking hyper-personalised AI experiences that are fast, offline-capable, and completely private.
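As a minimal sketch of the configuration step described above: this assumes the `com.google.mediapipe:tasks-genai` artifact, and the file paths, token limits, and sampling values are illustrative placeholders. Option names such as `setLoraPath` follow published MediaPipe samples but may shift between releases (in recent versions some sampling options live on session options instead, and LoRA loading requires the GPU backend).

```kotlin
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference

// Sketch: load a base Gemma model plus custom LoRA weights with
// MediaPipe's LLM Inference API. Paths below are hypothetical.
fun createTunedLlm(context: Context): LlmInference {
    val options = LlmInference.LlmInferenceOptions.builder()
        .setModelPath("/data/local/tmp/llm/gemma-2b-it-gpu-int4.bin") // base model
        .setMaxTokens(512)                                            // prompt + response budget
        .setLoraPath("/data/local/tmp/llm/lora_weights.tflite")       // fine-tuned LoRA adapter
        .build()
    return LlmInference.createFromOptions(context, options)
}

// Usage (on a background thread):
// val reply = createTunedLlm(context).generateResponse("Summarise this itinerary: ...")
```

Keeping the base model and the LoRA adapter as separate files means several domain-specific adapters can share one multi-gigabyte base model on disk.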

Key Takeaways

* Understand when to choose MediaPipe LLM Inference versus Gemini Nano for on-device generative AI.

* Set up and configure MediaPipe LLM Inference for on-device generative AI.

* Gain expertise in LoRA fine-tuning to adapt LLMs like Gemma-2B or Phi-2 for specific use cases cost-effectively.

* Dive into configuration options and multimodal prompting.

* Learn deployment workflows, GPU-accelerated LoRA inference, and the ethical AI practices we should follow.

Beyond the Grid: Crafting a Custom Android Launcher from Scratch

As the creator of Simple Launcher, a minimalist text-based launcher, I've navigated the unique challenges of replacing one of Android's most fundamental components. This lightning talk will demystify the process of building a custom launcher experience that sits at the powerful intersection of the Android system and the user.

We'll cut through the complexity and get straight to the essentials:

What is a launcher, and how does it sit between the user and the Android system? How can we build a custom launcher? And what should we consider while building a custom experience?

The Core Foundation: We'll explore how to leverage key Android APIs like PackageManager to query and manage installed applications, and how a Clean Architecture approach can keep your launcher robust and maintainable.
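The PackageManager query at the heart of this foundation can be sketched as follows; `launchableApps` is an illustrative helper name, not part of the Android API.

```kotlin
import android.content.Intent
import android.content.pm.PackageManager

// Sketch: enumerate every launchable app — the core data source
// for any custom launcher's app list.
fun launchableApps(pm: PackageManager): List<String> {
    val mainIntent = Intent(Intent.ACTION_MAIN)
        .addCategory(Intent.CATEGORY_LAUNCHER)   // only activities that appear in launchers
    return pm.queryIntentActivities(mainIntent, 0)
        .map { it.loadLabel(pm).toString() }     // human-readable label
        .sorted()
}
```

In a Clean Architecture setup this query would live behind a repository interface, so the UI layer never touches PackageManager directly.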

Staying in Sync: A launcher must be a living part of the system. I'll demonstrate how to effectively use BroadcastReceivers to respond instantly to app installations, updates, and uninstalls, ensuring your UI is never out of date.
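The sync mechanism above can be sketched with a runtime-registered receiver; `PackageChangeReceiver` and the `onChange` callback are illustrative names. Note the `"package"` data scheme: without it, package broadcasts never arrive.

```kotlin
import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent
import android.content.IntentFilter

// Sketch: refresh the launcher's app list whenever apps are
// installed, updated, or removed.
class PackageChangeReceiver(private val onChange: () -> Unit) : BroadcastReceiver() {
    override fun onReceive(context: Context, intent: Intent) = onChange()
}

fun registerPackageReceiver(context: Context, onChange: () -> Unit): BroadcastReceiver {
    val filter = IntentFilter().apply {
        addAction(Intent.ACTION_PACKAGE_ADDED)
        addAction(Intent.ACTION_PACKAGE_REMOVED)
        addAction(Intent.ACTION_PACKAGE_CHANGED)
        addDataScheme("package")                 // required for package events
    }
    return PackageChangeReceiver(onChange).also { context.registerReceiver(it, filter) }
}
```

Remember to unregister the receiver in the matching lifecycle callback to avoid leaks.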

Navigating the Maze: Building a launcher isn't just about code; it's about compliance and compatibility. We'll cover the necessary permissions (including the crucial QUERY_ALL_PACKAGES) and discuss the real-world complexities of dealing with OEM-specific quirks and Android version fragmentation.

The Accessibility "Hack": In a surprising twist, I'll share how I creatively leveraged Accessibility Services—often a tool for assistance—to implement powerful features like an app blocker, and discuss the ethical considerations and pitfalls of this unconventional approach.

Supercharging Android Apps with On-Device AI: Gemini Nano & Prompt API

The mobile AI revolution is increasingly moving on-device, driven by demands for privacy, low latency, and offline capability. In this session, I’ll demonstrate how to leverage cutting-edge on-device AI tools, including Gemini Nano, the Prompt API, the ML Kit GenAI APIs, and the MediaPipe LLM Inference API, to build intelligent Android apps entirely on-device.

Key Takeaways:

* On-Device AI Landscape - Understand the shift from cloud to on-device AI, the privacy benefits, and real-world use cases for features like smart reply, summarisation, and image analysis, plus how Android abstracts on-device intelligence through AICore, LoRA fine-tuning, Private Compute, and hardware accelerators like NPUs.

* Getting Started with Gemini Nano - Walk through integrating Google’s Gemini Nano generative model into a modern Android app, highlighting both the ML Kit GenAI APIs and the Prompt API for custom scenarios.

* Prompt API Demo - Through a real-world demo of an AI-driven activity-booking search system built on Gemini Nano using the Prompt API (optimised from 18s to 3s with prompt-design strategies and best practices), see how to design, optimise, and productionise on-device AI features for modern Android apps.

* Production Considerations - Address model size, device compatibility, privacy, and performance optimisation lessons learned from deploying AI features at scale in consumer and enterprise Android apps.

* Beyond Gemini - MediaPipe, LiteRT, and Custom Models: Explore the MediaPipe ecosystem for LLM (large language model) inference on-device, and how to bring your own models using LiteRT/TensorFlow Lite for specialised tasks.

Session presented at Droidcon Uganda 2025 on 10th November.

Devfest Mumbai 2025 Sessionize Event Upcoming

December 2025 Mumbai, India

DevFest Bujumbura 2025 Sessionize Event Upcoming

December 2025 Bujumbura, Burundi

Mobile Developers Week Abu Dhabi 2025 Sessionize Event

December 2025 Abu Dhabi, United Arab Emirates

droidcon Uganda 2025 Sessionize Event

November 2025 Kampala, Uganda
