Local Development in the AI Era
Most of us like to develop on our local machine as much as possible. It means we stay in control of dependencies, network issues and latency, configuration, and cost. And let's face it, we love to say "It works on my machine," don't we? 🙂
This way of working has been getting harder as systems become more and more distributed, and with the advent of AI-driven development it is more challenging than ever. Many of us rely on remote generative AI services for scaffolding, writing and testing code, and many other tasks.
Wouldn’t it be great if we could still run everything on our local machine? That’s what this session is all about!
In this session we'll look at various solutions that let us keep doing local, network-optional development:
- Run AI models locally with tools such as Ollama, Podman AI Lab, RamaLama, and Docker (a minimal sketch follows this list)
- Evaluate code assistants that can work with these locally running models
- Explore how to infuse AI capabilities from local models into our code
- Compare models from different vendors and of different sizes to find the right balance between performance and accuracy
- Weigh the pros and cons of local vs. remote models
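As a small taste of what "network-optional" development can look like in practice, here is a minimal sketch (not taken from the session materials) that prompts a model served by a locally running Ollama instance on its default port, 11434. The model name llama3.2 is just an example of a model you might have pulled beforehand:

```python
# Minimal sketch: prompt a model served by a local Ollama instance.
# Assumes Ollama is running and a model has been pulled, e.g. `ollama pull llama3.2`.
import json
import urllib.request

def ask_local_model(prompt: str, model: str = "llama3.2") -> str:
    """Send one prompt to Ollama's /api/generate endpoint and return the reply."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for a single JSON response instead of a stream
    }).encode("utf-8")
    request = urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama's default local address
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]

if __name__ == "__main__":
    # Runs entirely on your machine once the model has been downloaded.
    print(ask_local_model("Summarize the benefits of local AI development."))
```

Once the model weights are on disk, this round trip never leaves your machine, which is exactly the kind of workflow the session explores.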
Come to this session to learn how to keep developing locally while leveraging AI to optimize our development flow, our code, and the functionality itself!

Kevin Dubois
Senior Principal Developer Advocate at Red Hat
Brussels, Belgium