Session
The Future of Mobile AI: What On-Device Intelligence Means for App Developers
Two years ago, adding AI to your app meant cloud APIs only: sending data to servers, paying per request, and hoping for a good internet connection. That world is ending.
Today, you can run a small language model (SLM) directly on a phone. No internet, no per-request costs, and data never leaves the device. This is production-ready technology.
I built flutter_gemma, an open-source plugin for running AI models locally on iOS, Android, and Web. I'll share what I've learned: not marketing, but real trade-offs and opportunities.
What's possible now — running Gemma 3 on smartphones, the hardware that enables it, and the formats that matter (.task, .litertlm).
What changes for developers — offline-first AI, hybrid cloud/edge patterns, model size decisions, and fine-tuning and optimization skills.
Where we're heading — multimodal on-device models, function calling, on-device fine-tuning, and edge-specific models like Gemma 3n.
The future isn't replacing cloud — it's a new option: private, fast, and works anywhere.
Sasha Denisov
Chief Software Engineer at EPAM; Google Developer Expert (GDE) in AI, Flutter, Dart, and Firebase
Berlin, Germany