Session
Built-In AI in the Browser: Practical Capabilities for Modern Web Apps
AI features are becoming common in web applications, but most implementations still rely on server-side models and external services. Recently, browsers have started experimenting with built-in AI capabilities that run directly on the user’s device, introducing a new option for developers building AI-powered user experiences.
In this talk, we'll explore what built-in AI in the browser looks like today through concrete examples. We'll focus primarily on Chrome's built-in AI APIs, powered by Gemini Nano, an on-device language model that is downloaded on demand, and briefly touch on similar experiments in other browsers, such as Microsoft Edge's work with on-device models like Phi-4-mini.
These browser-provided APIs enable practical tasks such as text summarization, writing and rewriting assistance, language detection, translation, and multimodal prompts that work with text, images, and audio. We'll look at how these capabilities are already being used in real products: summarizing reviews, assisting with content creation, translating multilingual input, and generating image descriptions.
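As a taste of what the talk covers, here is a hedged sketch of Chrome's experimental Summarizer API. The API surface is still in flux and may differ between Chrome versions (it currently requires a flag or origin trial), so treat the exact names and options as illustrative rather than definitive:

```javascript
// Feature check: in supporting browsers the API is exposed as a
// global `Summarizer` object. Elsewhere this returns false.
function supportsSummarizer(scope = globalThis) {
  return 'Summarizer' in scope;
}

async function summarizeReviews(reviewText) {
  if (!supportsSummarizer()) {
    // Fall back to a server-side model, or skip the feature.
    return null;
  }

  // availability() reports whether the on-device model is ready,
  // still downloadable, or unsupported on this device.
  const availability = await Summarizer.availability();
  if (availability === 'unavailable') return null;

  const summarizer = await Summarizer.create({
    type: 'key-points',   // other documented types: 'tldr', 'teaser', 'headline'
    format: 'plain-text',
    length: 'short',
    monitor(m) {
      // Fires while the on-device model downloads on first use.
      m.addEventListener('downloadprogress', (e) => {
        console.log(`Model download: ${Math.round(e.loaded * 100)}%`);
      });
    },
  });

  return summarizer.summarize(reviewText);
}
```

The same availability-then-create pattern applies to the related Translator, LanguageDetector, Writer, and Rewriter APIs, which is why a single fallback strategy usually covers all of them.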
This session focuses on understanding the tools themselves: what problems they’re good at, what trade-offs come with running AI on-device, and how developers can evaluate built-in browser AI alongside other approaches they may already be using.
By the end of the talk, you’ll have a clear, practical understanding of what built-in AI in the browser can do today and how to reason about using it in your own applications.