OCR Magic in Go: Build AI-Powered Text Extraction with Ollama in Minutes
Are you tired of people saying Go doesn't work well with AI? Or that Python and JavaScript are better suited for AI? If you love Go (or you're an aspiring Gopher), join my talk and unleash the potential of Go in the AI world!
In this talk, we will focus on how OCR (Optical Character Recognition) works with Go. To play with OCR, we'll use the Ollama platform to run large language models and talk to them from Go. Among other features, Ollama lets us run models both locally and in the cloud.
The models we're going to use are vision models such as qwen3-vl, granite3.2-vision, and llava.
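To give a taste of what we'll build, here's a minimal sketch of sending an image to a local vision model from Go. It assumes the official github.com/ollama/ollama/api client, a locally pulled llava model, and a hypothetical receipt.png input file; the talk's own code may differ.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/ollama/ollama/api"
)

func main() {
	// Connect to the local Ollama server (OLLAMA_HOST or http://localhost:11434).
	client, err := api.ClientFromEnvironment()
	if err != nil {
		log.Fatal(err)
	}

	// Read the image we want to run OCR on (hypothetical example file).
	img, err := os.ReadFile("receipt.png")
	if err != nil {
		log.Fatal(err)
	}

	stream := false
	req := &api.GenerateRequest{
		Model:  "llava",
		Prompt: "Extract all text visible in this image.",
		Images: []api.ImageData{img},
		Stream: &stream,
	}

	// With streaming disabled, the callback fires once with the full response.
	err = client.Generate(context.Background(), req, func(resp api.GenerateResponse) error {
		fmt.Println(resp.Response)
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
```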
We'll explore the two approaches we can use to talk to LLMs: generate and chat.
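The sketch above uses the generate approach, a single prompt-and-response call. The chat approach is message-based instead; under the same assumptions (official Go client, llava model, example image), it might look like this:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/ollama/ollama/api"
)

func main() {
	client, err := api.ClientFromEnvironment()
	if err != nil {
		log.Fatal(err)
	}

	img, err := os.ReadFile("receipt.png")
	if err != nil {
		log.Fatal(err)
	}

	stream := false
	req := &api.ChatRequest{
		Model: "llava",
		Messages: []api.Message{
			{
				Role:    "user",
				Content: "Extract all text visible in this image.",
				Images:  []api.ImageData{img},
			},
		},
		Stream: &stream,
	}

	// Chat works on a message history, so follow-ups ("now return it as JSON")
	// can be appended as additional messages in later requests.
	err = client.Chat(context.Background(), req, func(resp api.ChatResponse) error {
		fmt.Println(resp.Message.Content)
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
```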
Understanding these differences is key to building effective AI applications in Go. This hands-on session blends theory with live coding to demystify integrating vision models into Go applications.
Leave with the skills to build real-world OCR tools using Go and open-source vision models.
Think Go can't power AI OCR apps? Think again.
With Ollama, scanning text from images becomes a breeze.
Build a fast, local OCR tool in Go using Ollama, no Python needed.
Join this session to unlock Go's untapped potential in AI-powered OCR.