Erik Bamberg
Java Expert, Vector Database & Machine Learning Enthusiast, experienced public speaker
Glasgow, United Kingdom
Java Expert, System Designer, and Machine Learning Enthusiast who loves to talk about elegant software solutions and how we can build better software.
More than 25 years as a developer and expert in the Java ecosystem.
He now spends his energy, enthusiasm, and research time on machine learning, vector databases, and the combination of the two.
Area of Expertise
Topics
What comes after ChatGPT? Vector Databases - the simple and powerful future of ML?
What comes after ChatGPT? Vector database projects like Weaviate, Pinecone, and Chroma have recently received millions of dollars in funding. But what are vector databases? And why will they be so important in the future?
Let us see how vector databases can help you define and run your machine learning business use cases. We will explore some real-world use cases and try to understand the potential of vectors and vector databases.
Not exactly sure what a vector is? No worries, you will learn everything you need to know about vectors.
A brief hands-on demonstration using only open-source tools will give you an idea of how to use this new generation of databases in practice.
We will also cover how vector databases can work together with ChatGPT and help you overcome some of its limitations.
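To make "vector" concrete before the session: a vector database stores embeddings (arrays of floats) and ranks them by a similarity metric. Here is a minimal Java sketch of cosine similarity, the metric most vector databases offer - my own illustration, not material from the talk:

```java
public class VectorSearchDemo {

    // Cosine similarity: dot(a, b) / (|a| * |b|).
    // 1.0 means "same direction" (very similar), 0.0 means unrelated.
    static double cosineSimilarity(double[] a, double[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    public static void main(String[] args) {
        // Toy 3-dimensional "embeddings"; real ones have hundreds of dimensions.
        double[] query  = {0.9, 0.1, 0.0};
        double[] docA   = {1.0, 0.0, 0.1};
        double[] docB   = {0.0, 1.0, 0.9};

        System.out.printf("query vs docA: %.3f%n", cosineSimilarity(query, docA));
        System.out.printf("query vs docB: %.3f%n", cosineSimilarity(query, docB));
    }
}
```

A vector database does essentially this comparison, but approximately and over millions of stored vectors, using index structures so it does not have to scan them all.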
It's all about a good story. Chatbot development with RASA.
Chatbots are everywhere. No big enterprise can be without a chatbot today.
The customer experience is more important than ever. A good chatbot can make the difference between happy regular customers and losing them to competitors. Your chatbot model is constantly evolving, which demands that you keep quality testing in mind over the whole lifecycle.
A good framework is what we need as developers. RASA ticks all the boxes: interactive story development, training, evolutionary training, testing, and operation.
Let's dive deep into the world of natural language understanding, stories, and entities. Implement your own Alexa without using Alexa!
After this session, you will have learned the basics of natural language understanding and how to use RASA for your own development and operations.
Let's tell the story together.
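As a taste of what "story development" means in RASA, here is a minimal story in the YAML training-data format of RASA 3.x - a simplified sketch, with intent and action names made up for illustration:

```yaml
version: "3.1"

stories:
- story: happy greeting path
  steps:
  - intent: greet              # user says something like "hello"
  - action: utter_greet        # bot replies with a greeting response
  - intent: ask_opening_hours  # user asks when the shop is open
  - action: utter_opening_hours
```

A story is a training example of one conversation turn sequence; RASA learns a dialogue policy from many such stories rather than from hand-written if/else rules.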
Write once, runs...on GPU? How Java tries to recapture the market.
"Write once, runs anywhere" - but not when it comes to GPUs or other hardware-side acceleration. The Java bytecode engine and GPUs are not a perfect pair. No wonder a lot of effort is being put into solving this problem.
The new Foreign Function & Memory API in Oracle's Project Panama (now in its third preview) and the new Vector API (now in its sixth incubator version) are promising and important changes to the JDK. Above all, the Foreign Function & Memory API promises better integration with external libraries than JNI.
Third-party approaches like TornadoVM also promise to accelerate Java on multi-core CPUs, GPUs, and FPGAs in an elegant way.
Let us discuss what the problem in the JVM is, what we can use the Foreign Function & Memory API for, and which alternatives are on the market.
Not only machine learning, but also applications like video editing and audio processing benefit from lightning-fast float operations.
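For a flavor of what the Foreign Function & Memory API looks like in code, here is a small sketch (my own, not from the session) that calls the C standard library's strlen without writing any JNI glue. It assumes a JDK where the API is final (Java 22 or newer):

```java
import java.lang.foreign.*;
import java.lang.invoke.MethodHandle;

public class StrlenDemo {

    // Calls the C library's strlen via the Foreign Function & Memory API.
    static long nativeStrlen(String s) throws Throwable {
        Linker linker = Linker.nativeLinker();
        // Find strlen in the default (C library) symbol lookup.
        MemorySegment addr = linker.defaultLookup().find("strlen").orElseThrow();
        MethodHandle strlen = linker.downcallHandle(
                addr,
                FunctionDescriptor.of(ValueLayout.JAVA_LONG, ValueLayout.ADDRESS));

        // A confined arena frees the native copy of the string deterministically.
        try (Arena arena = Arena.ofConfined()) {
            MemorySegment cString = arena.allocateFrom(s);  // NUL-terminated UTF-8
            return (long) strlen.invokeExact(cString);
        }
    }

    public static void main(String[] args) throws Throwable {
        System.out.println("strlen = " + nativeStrlen("Hello, Panama!"));
    }
}
```

Compare this with JNI, where the same call would need a generated C header, a hand-written native stub, and a separately compiled shared library.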
LLMs for your Java toolbox. LLM integration in Spring.
GenAI is everywhere, and now Spring.io has introduced the Spring AI module. Inspired by the popular LangChain framework, it helps integrate pre-trained LLMs seamlessly into your infrastructure, empowering Java developers to add AI/ML to their toolbox.
We will explain what you can expect from Spring AI and its limitations, and show you use cases for your business ideas.
Code samples will help Java developers get an idea of how to integrate Spring AI into their projects.
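As a taste of the kind of code samples to expect, the fragment below sketches Spring AI's fluent ChatClient in a REST controller. It is an illustrative fragment, not a runnable program: it assumes a Spring Boot application with the Spring AI starter on the classpath and a configured model provider (for example an OpenAI API key).

```java
// Illustrative fragment only - assumes Spring Boot + the Spring AI starter.
@RestController
class AskController {

    private final ChatClient chatClient;

    AskController(ChatClient.Builder builder) {
        // Spring AI auto-configures the builder for the provider you configured.
        this.chatClient = builder.build();
    }

    @GetMapping("/ask")
    String ask(@RequestParam String question) {
        // The fluent API hides the provider-specific HTTP calls.
        return chatClient.prompt()
                .user(question)
                .call()
                .content();
    }
}
```

The point of the abstraction is the same one Spring made for databases and messaging: swap the underlying LLM provider through configuration, not code changes.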
LLM on a budget - finetune your own LLM model
As more open-source LLM models are released, we want to test these models for our own use cases. However, the hardware requirements to run and finetune such large models are often a limiting factor.
We will see how to build prompts for training and look at different options like distillation, quantization, and low-rank adaptation to reduce memory usage during training and prediction.
These are the keys to loading these models on consumer hardware. Even finetuning on free Google Colab notebooks is possible.
This session will explain how this works and how you can finetune your own LLM.
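To see why quantization is one of "the keys", a back-of-the-envelope Java sketch (my own illustration, not from the session) of the weight-memory footprint of a 7-billion-parameter model at different precisions:

```java
public class ModelMemory {

    // Rough weight-memory footprint in GiB: parameters * bytes per parameter.
    // Ignores activations, optimizer state, and KV cache, which add more on top.
    static double weightMemoryGb(long parameters, double bytesPerParam) {
        return parameters * bytesPerParam / (1024.0 * 1024.0 * 1024.0);
    }

    public static void main(String[] args) {
        long params = 7_000_000_000L;  // a "7B" model
        System.out.printf("fp32: %.1f GiB%n", weightMemoryGb(params, 4.0));
        System.out.printf("fp16: %.1f GiB%n", weightMemoryGb(params, 2.0));
        System.out.printf("int8: %.1f GiB%n", weightMemoryGb(params, 1.0));
        System.out.printf("int4: %.1f GiB%n", weightMemoryGb(params, 0.5));
    }
}
```

At 4-bit precision the weights of a 7B model shrink from roughly 26 GiB (fp32) to around 3.3 GiB - small enough that the free Colab GPU tier, with its roughly 15 GB of VRAM, becomes feasible.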
I/O Extended 2023 Sessionize Event