
Dinoy Raj

Product Engineer – Android @ Strollby | Droidcon Uganda ’25 & Droidcon Abu Dhabi ’25 Speaker

Thiruvananthapuram, India


Android developer and Droidcon speaker focused on creating modern apps using Jetpack Compose, Kotlin, and on-device AI.

I’ve built scalable, high-performance mobile apps with strong modular structures, efficient pipelines, and great user experiences. I’m currently creating next-generation travel experiences at Strollby, where I work on hotel, flight, and activity bookings. My work involves multi-module architecture, Apollo GraphQL, payment integrations, and improving UI performance.

I’m exploring on-device AI, particularly MediaPipe LLM inference and the Prompt API. Outside of work, I created and published a minimalist Android launcher, Simple Launcher - Minimalist, built with Jetpack Compose, a custom design system, and automated release workflows.

I’ve spoken at Droidcon Abu Dhabi ’25, Droidcon Uganda ’25, DevFest Dubai ’25, and DevFest Mumbai ’25 on AI-driven Android development with Jetpack Compose and on-device intelligence, and I’ll be speaking at FOSSASIA 2026.

Let’s connect if you’re working on Android, Compose, or building mobile-first products.

Badges

  • Most Active Speaker 2025

Area of Expertise

  • Information & Communications Technology
  • Region & Country
  • Travel & Tourism

Topics

  • Android
  • Kotlin
  • Jetpack Compose
  • Kotlin Multiplatform
  • Android Developer
  • Artificial Intelligence
  • Mobile
  • Mobile Development
  • Mobile App Development
  • Developing Android Apps
  • AI/ML Enthusiast
  • UI Testing
  • Gradle
  • AI
  • Gemini Nano
  • GraphQL
  • Apollo GraphQL
  • Java
  • Swift
  • SwiftUI
  • Flutter
  • iOS
  • Python
  • LoRA
  • DevFest
  • Google
  • Android App Development
  • Mobile Apps
  • Mobile Accessibility
  • Mobile Applications
  • Google DevFest
  • Google Developer Groups
  • Android Development
  • Android Tools
  • Android & iOS Application Engineering
  • Accessibility
  • Firebase
  • JetBrains
  • Jetpack
  • Jetpack Glance
  • Kotlin/Android
  • Kotlin/Wasm
  • Kotlin Notebook
  • Kotlin/Native
  • Kotlin Coroutines
  • Introduction to Kotlin
  • Kotlin with Android
  • KotlinDL
  • Gemini
  • Gemini API
  • Google Gemini
  • AI Agents
  • droidcon
  • MediaPipe
  • Android Architecture
  • On-Device ML
  • On-Device AI
  • Generative AI
  • LLMs

Architecting the Shared Layer: GraphQL Best Practices in Kotlin Multiplatform

Implementing Clean Architecture in Kotlin Multiplatform (KMP) requires a robust data layer that abstracts complexity away from the UI. When adding GraphQL into the mix, the challenge lies in maintaining strict separation of concerns while leveraging the power of the graph across Android and iOS. This lightning talk explores architectural patterns for integrating Apollo Kotlin directly into the commonMain source set.

We will examine how to treat the KMP shared module as the authoritative source of truth, ensuring that platform-specific UIs remain purely reactive.
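
As a rough sketch of that setup, the Gradle wiring below registers the Apollo Kotlin plugin in the shared module so every generated model lives alongside commonMain; the plugin version, service name, and package name are illustrative:

```kotlin
// shared/build.gradle.kts — a minimal sketch, not a complete build file
plugins {
    kotlin("multiplatform")
    id("com.apollographql.apollo") version "4.1.0" // Apollo Kotlin Gradle plugin (version illustrative)
}

apollo {
    service("api") {
        // Generated operations and models stay inside the shared module,
        // so Android and iOS UIs never see raw network types
        packageName.set("com.example.shared.graphql")
    }
}
```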

Takeaways

Shared Normalized Cache: Leveraging Apollo's cache within the shared Data layer to provide a unified, offline-first source of truth that keeps Android and iOS states perfectly synchronized.

Schema-Driven Modularity: Strategies for isolating generated GraphQL network models within the Data layer and using mappers to expose pure Domain entities, preventing API details from leaking into business logic.

Type-Safe Error Boundaries: Implementing a unified Result wrapper pattern that intercepts GraphQL partial failures in the Shared Repository, translating them into clean Domain states for safe UI consumption (sketched below).
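
A minimal sketch of that Result-wrapper pattern, assuming Apollo Kotlin v4 package names; Hotel (domain entity), HotelQuery (generated operation), and toDomain() (hand-written mapper) are hypothetical names for illustration:

```kotlin
import com.apollographql.apollo.ApolloClient

// Unified result type exposed to both Android and iOS consumers
sealed interface DataState<out T> {
    data class Success<T>(val value: T) : DataState<T>
    data class Failure(val message: String) : DataState<Nothing>
}

class HotelRepository(private val apolloClient: ApolloClient) {
    suspend fun hotels(): DataState<List<Hotel>> {
        val response = apolloClient.query(HotelQuery()).execute()
        val data = response.data
        return when {
            // Intercept GraphQL partial failures here, inside the shared layer
            data == null || !response.errors.isNullOrEmpty() ->
                DataState.Failure(response.errors?.firstOrNull()?.message ?: "Unknown error")
            // Mappers stop generated network models leaking into the Domain layer
            else -> DataState.Success(data.hotels.map { it.toDomain() })
        }
    }
}
```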

Building a Private Brain on Android: Offline RAG with Vector Databases & MediaPipe

Privacy-conscious users and business needs are pushing a shift toward "Local-First AI." Running a large language model (LLM) on a device is a good beginning, but the real challenge is making that model smart with your own private data without using the cloud. This session looks at the setup of Offline Retrieval-Augmented Generation (RAG) on Android.

We will explore how to implement local vector search using high-performance databases like ObjectBox or Couchbase Lite to store and query embeddings. You will learn how to create a smooth pipeline that feeds real-time local information into open on-device models such as Gemma through the MediaPipe LLM Inference API. We will also address an important architectural question: when should you invest in LoRA fine-tuning instead of choosing the flexibility of RAG?
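
As a rough sketch of that pipeline: Embedder and VectorStore below are hypothetical seams standing in for an on-device embedding model and ObjectBox/Couchbase Lite vector queries; only LlmInference is the actual MediaPipe API.

```kotlin
import com.google.mediapipe.tasks.genai.llminference.LlmInference

// Hypothetical abstractions; back them with a real embedding model and vector DB
interface Embedder { fun embed(text: String): FloatArray }
interface VectorStore { fun nearest(query: FloatArray, k: Int): List<String> }

class RagPipeline(
    private val embedder: Embedder,
    private val store: VectorStore,
    private val llm: LlmInference, // MediaPipe LLM Inference handle (e.g. loaded with Gemma)
) {
    fun answer(question: String): String {
        // 1. Embed the question and retrieve the closest private snippets
        val snippets = store.nearest(embedder.embed(question), k = 3)
        // 2. Inject the retrieved context into the prompt
        val prompt = buildString {
            appendLine("Answer using only this context:")
            snippets.forEach { appendLine("- $it") }
            append("Question: $question")
        }
        // 3. Generate fully offline with the on-device model
        return llm.generateResponse(prompt)
    }
}
```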

Key Takeaways:
- Architecting Local RAG
- Vector DB Implementation
- Real-time Context Injection
- LoRA vs. RAG
- Performance Optimisation

Write Once, Query Everywhere: Designing a Shared GraphQL Architecture in Kotlin Multiplatform

As Kotlin Multiplatform (KMP) grows, the challenge moves from sharing simple utility code to handling complex data layers across Android, iOS, Desktop, and Web. GraphQL provides a type-safe and efficient solution for modern APIs. With Apollo Kotlin’s strong multiplatform support, developers can now share all their networking and data logic.

In this session, we will focus on how to implement Apollo GraphQL in a KMP architecture. Drawing on our experience building scalable Android apps with Jetpack Compose, we will look at how to apply those skills in a multiplatform setup.

Key takeaways include:

- Architecture: Setting up a shared data module that serves Android (Compose), iOS (SwiftUI), and Desktop.

- Type-Safety: Using Apollo’s code generation to keep a single source of truth for your API schema across all platforms.

- Best Practices: Managing caching, authentication headers, and reactive queries with Kotlin Coroutines and Flow (see the sketch after this list).

- Real-world hurdles: Handling platform-specific configurations and improving network performance for a consistent user experience.
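
A condensed sketch of such a shared client, assuming Apollo Kotlin v4; the endpoint URL, token provider, and GetTripsQuery are illustrative, and cache wiring is omitted for brevity:

```kotlin
import com.apollographql.apollo.ApolloClient
import com.apollographql.apollo.api.ApolloResponse
import com.apollographql.apollo.api.http.HttpRequest
import com.apollographql.apollo.api.http.HttpResponse
import com.apollographql.apollo.network.http.HttpInterceptor
import com.apollographql.apollo.network.http.HttpInterceptorChain
import kotlinx.coroutines.flow.Flow

// Injects an auth header into every request from one place in commonMain
class AuthInterceptor(private val token: suspend () -> String) : HttpInterceptor {
    override suspend fun intercept(request: HttpRequest, chain: HttpInterceptorChain): HttpResponse =
        chain.proceed(
            request.newBuilder().addHeader("Authorization", "Bearer ${token()}").build()
        )
}

val apolloClient = ApolloClient.Builder()
    .serverUrl("https://api.example.com/graphql") // illustrative endpoint
    .addHttpInterceptor(AuthInterceptor { "token" })
    .build()

// Reactive queries: Compose, SwiftUI, and Desktop UIs all just collect this Flow
fun trips(): Flow<ApolloResponse<GetTripsQuery.Data>> =
    apolloClient.query(GetTripsQuery()).toFlow()
```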

Beyond the Grid: Crafting a Custom Android Launcher from Scratch

As the creator of Simple Launcher, a minimalist text-based launcher, I've navigated the unique challenges of replacing one of Android's most fundamental components. This lightning talk will demystify the process of building a custom launcher experience that sits at the powerful intersection of the Android system and the user.

We'll cut through the complexity and get straight to the essentials:

What is a launcher, and how does it sit between the user and the Android system? How can we build a custom launcher, and what should we consider while crafting a custom experience?

The Core Foundation: We'll explore how to leverage key Android APIs like PackageManager to query and manage installed applications, and how a Clean Architecture approach can keep your launcher robust and maintainable.
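
A minimal sketch of that app-listing step might look like this; LauncherApp and loadLauncherApps are illustrative names:

```kotlin
import android.content.Context
import android.content.Intent

data class LauncherApp(val label: String, val packageName: String)

fun loadLauncherApps(context: Context): List<LauncherApp> {
    // Every activity that can appear on a home screen declares MAIN/LAUNCHER
    val intent = Intent(Intent.ACTION_MAIN).addCategory(Intent.CATEGORY_LAUNCHER)
    return context.packageManager.queryIntentActivities(intent, 0).map { info ->
        LauncherApp(
            label = info.loadLabel(context.packageManager).toString(),
            packageName = info.activityInfo.packageName,
        )
    }
}
```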

Staying in Sync: A launcher must be a living part of the system. I'll demonstrate how to effectively use BroadcastReceivers to respond instantly to app installations, updates, and uninstalls, ensuring your UI is never out of date.
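
For example, a sketch of a package-change receiver; PackageChangeReceiver and the refreshApps callback are illustrative:

```kotlin
import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent
import android.content.IntentFilter

// Re-queries the app list whenever a package is added, removed, or replaced
class PackageChangeReceiver(private val refreshApps: () -> Unit) : BroadcastReceiver() {
    override fun onReceive(context: Context, intent: Intent) = refreshApps()
}

fun registerPackageReceiver(context: Context, receiver: PackageChangeReceiver) {
    val filter = IntentFilter().apply {
        addAction(Intent.ACTION_PACKAGE_ADDED)
        addAction(Intent.ACTION_PACKAGE_REMOVED)
        addAction(Intent.ACTION_PACKAGE_REPLACED)
        addDataScheme("package") // required, or the receiver never fires
    }
    context.registerReceiver(receiver, filter)
}
```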

Navigating the Maze: Building a launcher isn't just about code; it's about compliance and compatibility. We'll cover the necessary permissions (including the crucial QUERY_ALL_PACKAGES) and discuss the real-world complexities of dealing with OEM-specific quirks and Android version fragmentation.

The Accessibility "Hack": In a surprising twist, I'll share how I creatively leveraged Accessibility Services—often a tool for assistance—to implement powerful features like an app blocker, and discuss the ethical considerations and pitfalls of this unconventional approach.
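
As a simplified sketch of that idea (the blocked-package set is illustrative, and a real service also needs a manifest declaration and explicit user opt-in):

```kotlin
import android.accessibilityservice.AccessibilityService
import android.view.accessibility.AccessibilityEvent

class AppBlockerService : AccessibilityService() {
    private val blockedPackages = setOf("com.example.distracting.app")

    override fun onAccessibilityEvent(event: AccessibilityEvent) {
        if (event.eventType != AccessibilityEvent.TYPE_WINDOW_STATE_CHANGED) return
        val pkg = event.packageName?.toString() ?: return
        if (pkg in blockedPackages) {
            // Bounce the user back to the launcher instead of the blocked app
            performGlobalAction(GLOBAL_ACTION_HOME)
        }
    }

    override fun onInterrupt() = Unit
}
```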

Build Your Pocket Brain: Open On-Device LLMs via MediaPipe on Android

MediaPipe's new LLM Inference API is leading the charge for Android developers. Beyond simply running pre-trained models like Gemma on a device, we will dive deep into the power of customisation using Low-Rank Adaptation (LoRA), a parameter-efficient fine-tuning (PEFT) technique.

You'll learn the end-to-end workflow for creating a specialised, on-device LLM tailored to your app's unique domain. We will cover how to take a base model (like Gemma or Phi-2), fine-tune it with your own dataset using the PEFT library in Python, convert both the base model and the LoRA weights into the MediaPipe-compatible FlatBuffer format, and finally, integrate this custom-tuned model into an Android application.

We will demonstrate how to configure LlmInferenceOptions in Kotlin to load both the base model and the .tflite LoRA file, unlocking hyper-personalised AI experiences that are fast, offline-capable, and completely private, as sketched below.
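
A sketch of that configuration, following MediaPipe's documented options builder; the file paths are illustrative, and LoRA loading currently targets GPU-backed models:

```kotlin
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference
import com.google.mediapipe.tasks.genai.llminference.LlmInference.LlmInferenceOptions

fun createCustomLlm(context: Context): LlmInference {
    val options = LlmInferenceOptions.builder()
        .setModelPath("/data/local/tmp/llm/gemma-2b-it-gpu.bin") // base model (illustrative path)
        .setLoraPath("/data/local/tmp/llm/travel_lora.tflite")   // LoRA weights (illustrative path)
        .setMaxTokens(512)
        .build()
    return LlmInference.createFromOptions(context, options)
}

// Usage: the custom-tuned model now answers domain prompts fully offline
// val reply = createCustomLlm(context).generateResponse("Summarise this itinerary...")
```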

Key Takeaways

* Understanding when to use MediaPipe LLM Inference or Gemini Nano for on-device generative AI.

* Set up and configure MediaPipe LLM Inference for on-device generative AI.

* Expertise in LoRA fine-tuning to adapt LLMs like Gemma-2B or Phi-2 for specific use cases cost-effectively.

* A dive into configuration options and multimodal prompting.

* Knowledge of deployment workflows, GPU-accelerated LoRA inference, and the ethical AI practices we should follow.

Supercharging Android Apps with On-Device AI: Gemini Nano & Prompt API

The mobile AI revolution is increasingly moving on-device, driven by demands for privacy, low latency, and offline capability. In this session, I’ll demonstrate how to leverage cutting-edge on-device AI tools, including Gemini Nano, the Prompt API, the ML Kit GenAI APIs, and the MediaPipe LLM Inference API, to build intelligent Android apps entirely on-device.

Key Takeaways:

* On-Device AI Landscape - Understand the shift from cloud to on-device AI, the privacy benefits, and real-world use cases such as smart reply, summarisation, and image analysis, and see how Android abstracts on-device intelligence through AICore, LoRA fine-tuning, Private Compute, and hardware accelerators like NPUs.

* Getting Started with Gemini Nano - Walk through integrating Google’s Gemini Nano generative model into a modern Android app, highlighting both the ML Kit GenAI APIs and the Prompt API for custom scenarios (see the first sketch after this list).

* Prompt API Demo - Through a real-world demo of an AI-driven activity-booking search built on Gemini Nano via the Prompt API (optimised from 18 s to 3 s using prompt-design strategies and best practices), I’ll show how to design, optimise, and productionise on-device AI features for modern Android apps.

* Production Considerations - Address model size, device compatibility, privacy, and performance optimisation lessons learned from deploying AI features at scale in consumer and enterprise Android apps.

* Beyond Gemini - MediaPipe, LiteRT, and Custom Models: Explore the MediaPipe ecosystem for LLM (large language model) inference on-device, and how to bring your own models using LiteRT/TensorFlow Lite for specialised tasks.
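
As a rough illustration of the Prompt API flow, here is a sketch against the experimental Google AI Edge SDK surface for Gemini Nano; this SDK is in preview, so these names (GenerativeModel, generationConfig) may change, and suggestActivities is a hypothetical helper:

```kotlin
import android.content.Context
import com.google.ai.edge.aicore.GenerativeModel
import com.google.ai.edge.aicore.generationConfig

suspend fun suggestActivities(appContext: Context, query: String): String? {
    val model = GenerativeModel(
        generationConfig {
            context = appContext // AICore requires an Android Context
            temperature = 0.2f
            topK = 16
            maxOutputTokens = 256
        }
    )
    // Free-form, on-device prompting: no network round trip, data stays local
    return model.generateContent("Suggest bookable activities for: $query").text
}
```

And for the bring-your-own-model path, a minimal LiteRT/TensorFlow Lite sketch; the model file and tensor shapes are illustrative and depend entirely on your model:

```kotlin
import org.tensorflow.lite.Interpreter
import java.io.File

fun runCustomModel(modelFile: File, input: FloatArray): FloatArray {
    val output = Array(1) { FloatArray(10) }    // assumed 1x10 output tensor
    Interpreter(modelFile).use { interpreter ->
        interpreter.run(arrayOf(input), output) // assumed 1xN input tensor
    }
    return output[0]
}
```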

This session was presented at droidcon Uganda 2025 on 10 November.

FOSSASIA Summit 2026 Upcoming

March 2026 Bangkok, Thailand

TechMang 2026 Sessionize Event Upcoming

January 2026 Mangaluru, India

DevFest Mumbai 2025 Sessionize Event

December 2025 Mumbai, India

DevFest Bujumbura 2025 Sessionize Event

December 2025 Bujumbura, Burundi

Made for Dev by Global AI User group Sessionize Event

December 2025

Mobile Developers Week Abu Dhabi 2025 Sessionize Event

December 2025 Abu Dhabi, United Arab Emirates

DevFest Dubai 2025

December 2025 Dubai, United Arab Emirates

IndeHub Zoho Apptics Android edition 2025

November 2025 Chennai, India

droidcon Uganda 2025 Sessionize Event

November 2025 Kampala, Uganda
