Speaker

Jhin Lee

Flutter GDE | GDG & Flutter Montreal Organizer | FCAIC | Senior Dev @Unity

Montréal, Canada

Jhin Lee is a Senior Full-Stack Developer at Unity and a Google Developer Expert (GDE) in Flutter and Dart. With over 20 years of hands-on experience across embedded systems, backend infrastructure, Android, iOS, and web, Jhin focuses on building robust solutions that push the boundaries of modern architecture. Driven by a passion for open-source tooling and AI integration, Jhin is the author of numerous projects, including llamadart (a package for cross-platform local LLM inference via llama.cpp), dinja, and mcp_dart. Beyond daily development, Jhin is a community leader serving as an organizer for GDG and Flutter Montreal, frequently speaking on seamlessly integrating advanced AI models and the realities of maintaining open-source libraries.

Area of Expertise

  • Information & Communications Technology

Topics

  • Flutter
  • Mobile
  • Dart
  • AI
  • LLM

From Concept to Store: Building 'Relia' with an AI-First Workflow

Building an app solo often feels like juggling a dozen different roles—product manager, designer, engineer, and marketer. But what if you could offload half the workload to AI?

In this session, I will share the end-to-end journey of building and publishing Relia, demonstrating how I integrated AI agents and tools into every stage of the lifecycle. We will move beyond simple code generation and explore how to use AI for effective ideation, market validation, complex problem-solving during development, and navigating the publishing process.

Key Takeaways:

* The AI Lifecycle: How to chain AI tools from the "napkin sketch" phase to the App Store submission.

* Tooling Strategy: A breakdown of the specific AI stack used (and how to use them effectively).

* Lessons Learned: The pitfalls of AI-generated code and how to overcome "hallucinated" roadblocks.

* Speed to Market: Real metrics on how AI accelerated the development timeline.

Flutter Workshop for Beginners

Join us for an immersive workshop aimed at computer science and software engineering students and newcomers! Dive deep into the world of application development with Flutter, Dart, and Firebase. Whether you're a complete beginner or just looking to sharpen your skills, this workshop is designed for you.
We'll start with the Flutter basics and then guide you through building a chat application using Flutter and Firebase.

Gemma 3 in Your Pocket: Offline AI with Flutter

Want to add offline AI to your Flutter app? This session provides a rapid, practical guide to integrating Gemma 3. We'll skip the deep AI theory and focus on the essential steps: setting up your environment, deploying a basic Gemma 3 model, and running your first offline inference in Flutter. You'll leave with a working example and the knowledge to start building your own offline AI features.

Leveraging Go's Power in Your Flutter App: A Guide to FFI Integration

Flutter is fantastic for building beautiful and performant UIs. But what if you need to integrate functionalities that leverage Go's strengths? This talk dives into using Foreign Function Interface (FFI) to seamlessly integrate Go libraries within your Flutter application. We'll explore setting up the development environment, creating Go libraries, and using ffigen to generate Dart bindings. By the end, you'll be equipped to leverage Go's capabilities for specific tasks within your Flutter projects.
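
The binding side of the workflow above can be sketched in a few lines of Dart. This is an illustrative example, not code from the talk: it assumes a hypothetical Go library compiled with `go build -buildmode=c-shared` that exports an `Add` function, and it hand-writes the lookup that `ffigen` would normally generate from the emitted C header.

```dart
import 'dart:ffi';
import 'dart:io';

// C-level signature of the exported Go function: int32_t Add(int32_t, int32_t).
typedef AddNative = Int32 Function(Int32, Int32);
typedef AddDart = int Function(int, int);

void main() {
  // Load the shared library produced by `go build -buildmode=c-shared`.
  // The library name "libadd" is an assumption for this sketch.
  final lib = DynamicLibrary.open(
      Platform.isWindows ? 'add.dll' : 'libadd.so');

  // Look up the exported symbol and bind it to a callable Dart function.
  final add = lib.lookupFunction<AddNative, AddDart>('Add');

  print(add(2, 3)); // Calls straight into the Go implementation.
}
```

In practice, pointing `ffigen` at the `.h` file that `-buildmode=c-shared` generates produces these typedefs and lookups for you, which is the approach the session covers.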

Flight Mode AI: Building Local LLM Apps Easily with LlamaDart

We are used to "high confidence" coding with cloud models like Gemini, but what happens when the Wi-Fi cuts out at 30,000 feet? I started building a writing assistant for myself and quickly realized that true reliability meant going local. But integrating llama.cpp, the gold standard for local inference, usually means entering "dependency hell": managing complex C++ builds across Linux, Windows, Android, and iOS.

In this talk, I will show you how llamadart (https://pub.dev/packages/llamadart) solves this. By leveraging modern Dart 3.10 build hooks and GitHub Actions, I’ve automated the heavy lifting: the library detects your platform and auto-downloads the correct pre-compiled binary at build time.

We will move beyond the low-level architecture and focus on the developer experience available right now. I’ll demonstrate how easy it is to spin up an offline AI:

* Zero Configuration: No CMake, no NDKs, no manual compilation.

* High-Level APIs: Initialize a model and start chatting with just a few lines of Dart code.

* Auto-Templating: Forget manual string formatting; let the library handle the chat templates for you.

Whether you want to build a privacy-first assistant or just ensure your app works in "Flight Mode," you’ll leave this session ready to build your first offline AI app today.
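
To make "a few lines of Dart" concrete, a minimal offline chat loop might look roughly like the sketch below. The class and method names (`Llama.load`, `chat`, `dispose`) and the model filename are assumptions for illustration, not the actual llamadart API; consult the package documentation on pub.dev for the real surface.

```dart
import 'package:llamadart/llamadart.dart';

Future<void> main() async {
  // Hypothetical API shape: load a local GGUF model from disk.
  // The pre-compiled llama.cpp binary was fetched at build time,
  // so no manual compilation step is needed.
  final llama = await Llama.load('gemma-3-1b-it-Q4_K_M.gguf');

  // The model's chat template is applied automatically, so a plain
  // prompt string is enough -- no manual role/turn formatting.
  final reply = await llama.chat('Write a haiku about flight mode.');
  print(reply);

  // Free the native model memory when done.
  llama.dispose();
}
```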

Quick Start MCP with Dart: Building LLM Context Servers Now

The Model Context Protocol (MCP) is revolutionizing how applications interact with Large Language Models (LLMs) by providing a standardized way to exchange contextual information. This talk will begin with a brief overview of MCP and its diverse use cases with LLMs. The core focus, however, will be a practical demonstration of building a robust MCP server using Dart. We'll explore the key components, implementation strategies, and best practices for leveraging Dart's capabilities to create efficient and scalable MCP servers for LLM applications. Whether you're a Dart enthusiast or an LLM practitioner, this session will equip you with the knowledge to integrate MCP into your projects.
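
As a rough idea of what such a server looks like, the sketch below registers a single tool and serves it over stdio, the transport most MCP hosts support first. The class and method names are assumptions for illustration and may not match the actual mcp_dart API; see the package documentation for the real interface.

```dart
import 'package:mcp_dart/mcp_dart.dart';

// Illustrative sketch only: McpServer, tool, and StdioServerTransport
// are assumed names, not verified mcp_dart identifiers.
void main() async {
  final server = McpServer(name: 'demo-server', version: '1.0.0');

  // Register a tool that an LLM client can call to fetch context.
  server.tool(
    'current_time',
    description: 'Returns the server time in ISO-8601 format',
    callback: () async => DateTime.now().toIso8601String(),
  );

  // Speak the protocol over stdin/stdout so any MCP host can attach.
  await server.connect(StdioServerTransport());
}
```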
