Flight Mode AI: Building Local LLM Apps Easily with LlamaDart
We are used to "high confidence" coding with cloud models like Gemini, but what happens when the Wi-Fi cuts out at 30,000 feet? I started building a writing assistant for myself and quickly realized that true reliability meant going local. But integrating llama.cpp, the gold standard for local inference, usually means entering "dependency hell": managing complex C++ builds across Linux, Windows, Android, and iOS.
In this talk, I will show you how llamadart (https://pub.dev/packages/llamadart) solves this. By leveraging modern Dart 3.10 build hooks and GitHub Actions, I've automated the heavy lifting: the library detects your platform and auto-downloads the correct pre-compiled binary at build time.
We will move beyond the low-level architecture and focus on the developer experience available right now. I’ll demonstrate how easy it is to spin up an offline AI:
Zero Configuration: No CMake, no NDKs, no manual compilation.
High-Level APIs: Initialize a model and start chatting with just a few lines of Dart code.
Auto-Templating: Forget manual string formatting; let the library handle the chat templates for you.
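To give a feel for what "a few lines of Dart" can mean here, the sketch below shows the shape of a high-level local-chat API. It is illustrative only: the `Llama`, `ChatMessage`, `load`, and `chat` names and the model filename are assumptions for this abstract, not llamadart's actual API surface.

```dart
// Hypothetical sketch of a high-level offline chat API.
// Class and method names are illustrative, not llamadart's real interface.
import 'dart:io';

Future<void> main() async {
  // Load a local GGUF model file: no cloud calls, no network required.
  final llama = await Llama.load('models/my-model.gguf');

  // Auto-templating: pass plain chat messages and let the library
  // apply the model's chat template before running inference.
  final reply = await llama.chat([
    ChatMessage.user('Summarize local LLM inference in one sentence.'),
  ]);

  stdout.writeln(reply);

  // Free the native llama.cpp resources when done.
  llama.dispose();
}
```

The point of the sketch is the division of labor: the app supplies messages, while model loading, prompt templating, and native binary management stay inside the library.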
Whether you want to build a privacy-first assistant or just ensure your app works in "Flight Mode," you’ll leave this session ready to build your first offline AI app today.
Jhin Lee
Flutter GDE | GDG & Flutter Montreal Organizer | FCAIC | Senior Dev @Unity
Montréal, Canada