Mete Atamel

Software Engineer and Developer Advocate at Google

London, United Kingdom

I’m a Software Engineer and Developer Advocate at Google in London. I build tools, demos, and tutorials, and give talks to educate developers and help them succeed on Google Cloud.

Awards

  • Most Active Speaker 2023
  • Most Active Speaker 2022

Area of Expertise

  • Information & Communications Technology

Topics

  • Cloud Computing
  • Cloud & Infrastructure

Using ReACT + RAG to augment your LLM-based applications

Large Language Models (LLMs) have some limitations, such as being unable to answer questions about data the model wasn’t trained on, or hallucinating fake or misleading information. RAG (Retrieval-Augmented Generation) is the concept of retrieving data to augment your prompt to the LLM, allowing it to generate more accurate responses and reducing hallucinations. ReACT (Reason and Act) is a prompting technique that guides an LLM to verbally express its reasoning and adapt its plan based on data from external sources. In this talk, we'll learn about RAG and ReACT, and see through a sample app how combining them can extend your LLM-based applications and improve their accuracy.
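To make the RAG flow above concrete (retrieve, augment the prompt, generate), here is a minimal Java sketch; retrieveRelevantSnippets and callLlm are hypothetical placeholders for your own retrieval layer and model client, not part of any particular library.

    import java.util.List;

    public class RagSketch {

        // Hypothetical placeholder: look up snippets related to the question,
        // e.g. via a vector store or keyword search over your own documents.
        static List<String> retrieveRelevantSnippets(String question) {
            return List.of("Cloud Run scales to zero when there is no traffic.");
        }

        // Hypothetical placeholder: send the augmented prompt to your LLM client.
        static String callLlm(String prompt) {
            return "(model response goes here)";
        }

        public static void main(String[] args) {
            String question = "Does Cloud Run scale to zero?";

            // Retrieve: fetch data the model wasn't trained on.
            List<String> context = retrieveRelevantSnippets(question);

            // Augment: put the retrieved snippets into the prompt.
            String prompt = "Answer using only the context below.\n"
                    + "Context:\n" + String.join("\n", context) + "\n"
                    + "Question: " + question;

            // Generate: the model answers from the supplied context,
            // which reduces hallucinations.
            System.out.println(callLlm(prompt));
        }
    }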

Using Gemini from C#

Back in December, Google announced Gemini, its most capable and general AI model so far. Gemini comes in two flavors: Gemini Pro, a fine-tuned model for natural language tasks, and Gemini Pro Vision, a multimodal model that supports image and video prompts. In this talk, we'll learn about these models and how to use them from C#/.NET.

Multi-modal LLMs, Introduction and Avoiding Common Pitfalls

Multi-modal large language models (LLMs) can understand text, images, and videos, and with their ever-increasing context size they open up interesting use cases for application developers. At the same time, LLMs often suffer from hallucinations (fake content), outdated information (not based on the latest data), reliance on public data only (no private data), and a lack of citations back to original sources. In this talk, we’ll first take a tour of Gemini, Google’s multi-modal LLM, show what’s possible, and explain how to integrate it with your applications. We’ll then explore various techniques to overcome common LLM pitfalls, including Retrieval-Augmented Generation (RAG) to enhance prompts with relevant data, ReACT prompting to guide LLMs in verbalizing their reasoning, Function Calling to grant LLMs access to external APIs, Grounding to link LLM outputs to verifiable information sources, and more.
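As a rough, library-agnostic illustration of the Function Calling idea mentioned above, here is a minimal Java sketch of the loop: the model requests a tool call, the application runs the real API, and the result is fed back so the final answer is based on live data. askModel and getWeather are hypothetical placeholders, not a specific SDK.

    public class FunctionCallingSketch {

        // Hypothetical stand-in for a chat model that can request tool calls.
        // Real SDKs return a structured function-call object rather than plain text.
        static String askModel(String prompt) {
            if (prompt.contains("Tool result")) {
                return "It is currently 12°C and cloudy in London.";
            }
            return "CALL getWeather(\"London\")";
        }

        // The external API the model is allowed to use.
        static String getWeather(String city) {
            return "12°C and cloudy in " + city;
        }

        public static void main(String[] args) {
            String prompt = "What's the weather in London right now?";
            String modelTurn = askModel(prompt);

            // If the model asked for a function, execute it and hand the result back
            // so the model can answer from real data instead of guessing.
            if (modelTurn.startsWith("CALL getWeather")) {
                String observation = getWeather("London");
                System.out.println(askModel(prompt
                        + "\nTool result: " + observation
                        + "\nAnswer the user using this result."));
            }
        }
    }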

Hands-on LLM with Java

In this hands-on lab, you'll learn to use Large Language Models (LLMs) directly from Java. You will first familiarize yourself with Gemini, Google's new multi-modal LLM. Then, you will apply Gemini to different use cases, such as extracting data from unstructured text, classifying documents, searching your own documents, and supplementing the model with calls to external APIs. The lab uses Java and the Vertex AI Java libraries along with LangChain4j. Come with your laptop.
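As a taste of what the lab covers, here is a minimal sketch of calling Gemini through LangChain4j's Vertex AI integration. It assumes the langchain4j-vertex-ai-gemini dependency, a Google Cloud project with Vertex AI enabled, and application-default credentials; exact class and method names may differ between LangChain4j versions.

    import dev.langchain4j.model.chat.ChatLanguageModel;
    import dev.langchain4j.model.vertexai.VertexAiGeminiChatModel;

    public class GeminiWithLangChain4j {
        public static void main(String[] args) {
            // Assumes GOOGLE_CLOUD_PROJECT is set and the Vertex AI API is enabled.
            ChatLanguageModel model = VertexAiGeminiChatModel.builder()
                    .project(System.getenv("GOOGLE_CLOUD_PROJECT"))
                    .location("us-central1")
                    .modelName("gemini-1.5-flash")
                    .build();

            // One of the lab's use cases: extract structured data from free text.
            String reply = model.generate(
                    "Extract the city and the date from this sentence as JSON: "
                    + "\"The meetup takes place in London on 12 March.\"");

            System.out.println(reply);
        }
    }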

GenAI for Java developers

Large language models (LLMs) offer immense potential, but their typically Python-centric ecosystem presents a challenge for Java developers. In this talk, we'll demonstrate how to use LLMs from Java. We'll use Gemini, Google's multi-modal AI model, to perform tasks like text generation, image description, and more, all within a Java environment using the Vertex AI libraries and LangChain4J.
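For a flavor of the Vertex AI Java client mentioned above, here is a minimal text-generation sketch. It assumes the google-cloud-vertexai dependency and application-default credentials; model names and API details may vary by library version.

    import com.google.cloud.vertexai.VertexAI;
    import com.google.cloud.vertexai.api.GenerateContentResponse;
    import com.google.cloud.vertexai.generativeai.GenerativeModel;
    import com.google.cloud.vertexai.generativeai.ResponseHandler;

    public class GeminiWithVertexAi {
        public static void main(String[] args) throws Exception {
            // try-with-resources closes the underlying connection when done.
            try (VertexAI vertexAI = new VertexAI(
                    System.getenv("GOOGLE_CLOUD_PROJECT"), "us-central1")) {

                GenerativeModel model = new GenerativeModel("gemini-1.5-flash", vertexAI);

                GenerateContentResponse response =
                        model.generateContent("Write a haiku about Java and GenAI.");

                System.out.println(ResponseHandler.getText(response));
            }
        }
    }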

Open Gemma models in the cloud

In this session, we'll start by exploring the capabilities of open models such as Gemma 2 for general GenAI, PaliGemma for vision, CodeGemma for code, and more. Then we'll see how to run them locally and how to deploy them to the cloud for your applications.
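As one possible way to run an open model locally (an assumption for illustration; the session may use a different setup), here is a sketch that calls a Gemma model served by a local runtime such as Ollama over its HTTP API, using only the JDK's built-in HttpClient.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class LocalGemmaSketch {
        public static void main(String[] args) throws Exception {
            // Assumes a local model server (e.g. `ollama run gemma2`) on port 11434.
            String body = """
                    {"model": "gemma2", "prompt": "Why are open models useful?", "stream": false}
                    """;

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:11434/api/generate"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());

            // The response is JSON; the generated text is in the "response" field.
            System.out.println(response.body());
        }
    }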

Multi-modal LLMs for application developers

Multi-modal large language models (LLMs) can understand text, images, and videos, and with their ever-increasing context size they open up interesting use cases for application developers. In this talk, we’ll take a tour of Gemini, Google’s multi-modal LLM, and Gemma, its family of open models, showing what’s possible and how to integrate them into your applications. We’ll also explore techniques such as RAG, function calling, and grounding to supply LLMs with more up-to-date and relevant data and to minimize hallucinations.

Lessons Learned Building a GenAI Powered App

Everyone's excited about AI, and justifiably so, but can it help us build better apps? This session focuses on a case study: a GenAI-powered interactive trivia quiz app running in the cloud. We'll explore the challenges we faced while building the app and how GenAI proved to be a game changer. Join me for a fun and educational session featuring a live demo with audience participation, and some valuable lessons learned.

Improve Your Development Workflow with an AI assistant

In this hands-on session, you’ll learn how an AI assistant can speed up your development workflow. More specifically, you’ll see how Google’s Gemini-powered AI assistant can help you design, code, test, deploy, and operate your application, all within your IDE. You’ll also learn how to set the right expectations and apply best practices to keep frustration at bay while using an AI assistant.

Avoid common LLM pitfalls

It’s easy to generate content with a Large Language Model (LLM), but the output often suffers from hallucinations (fake content), outdated information (not based on the latest data), reliance on public data only (no private data), and a lack of citations back to original sources. That’s not ideal for real-world applications. In this talk, we’ll provide a quick overview of the latest advancements in multi-modal LLMs, highlighting their capabilities and limitations. We’ll then explore various techniques to overcome common LLM pitfalls, including Retrieval-Augmented Generation (RAG) to enhance prompts with relevant data, ReACT prompting to guide LLMs in verbalizing their reasoning, Function Calling to grant LLMs access to external APIs, Grounding to link LLM outputs to verifiable information sources, and more.
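Since ReACT prompting is mostly about how the prompt is structured, here is a minimal Java sketch of such a prompt; the action names and the overall format are illustrative assumptions rather than a fixed standard.

    public class ReActPromptSketch {
        public static void main(String[] args) {
            // A ReACT-style prompt asks the model to alternate explicit reasoning
            // ("Thought"), tool use ("Action"), and tool results ("Observation").
            String prompt = """
                    Answer the question by alternating Thought, Action and Observation steps.
                    Available actions: search[query], calculate[expression].
                    Stop when you can write "Final Answer:".

                    Question: Which speaker gave the most talks at the conference last year?
                    Thought:""";

            // Send `prompt` to your LLM client of choice; after each Action the
            // application runs the tool and appends an Observation line before
            // asking the model to continue.
            System.out.println(prompt);
        }
    }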
