Mete Atamel
Software Engineer and Developer Advocate at Google
London, United Kingdom
I’m a Software Engineer and a Developer Advocate at Google in London. I build tools, demos, and tutorials, and give talks to educate developers and help them succeed on Google Cloud.
Area of Expertise
Topics
Coding in the Agentic Era
AI tools have fundamentally changed the way we code. The landscape is a diverse and often confusing maze of tools, from smart autocompletion and web-based vibe coding to CLI/GitHub agents and fully agentic IDEs.
In this demo-driven session, we’ll explore the AI tools landscape, providing a clear overview and essential best practices to help you get the most out of these tools.
Beyond the Prompt: Evaluating, Testing, and Securing LLM Applications
When you change prompts or modify the Retrieval-Augmented Generation (RAG) pipeline in your LLM applications, how do you know it’s making a difference? You don’t—until you measure. But what should you measure, and how? Similarly, how can you ensure your LLM app is resilient against prompt injections or avoids providing harmful responses? More robust guardrails on inputs and outputs are needed beyond basic safety settings.
In this talk, we’ll explore various evaluation frameworks such as Vertex AI Evaluation, DeepEval, and Promptfoo to assess LLM outputs, understand the types of metrics they offer, and see how these metrics are useful. We’ll also dive into testing and security frameworks like LLM Guard to ensure your LLM apps are safe and limited to precisely what you need.
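To make the idea of "measuring" concrete, here is a toy sketch of the kind of reference-free metric that frameworks like DeepEval or Promptfoo automate. This is not any framework’s real API; the `relevancy_score` function and its term-overlap heuristic are illustrative assumptions only — real frameworks typically use LLM judges or embeddings.

```python
# Toy "answer relevancy" metric: fraction of question terms that reappear
# in the answer. Illustrative only; real eval frameworks (DeepEval,
# Promptfoo, Vertex AI Evaluation) use LLM judges or embedding similarity.
def relevancy_score(question: str, answer: str) -> float:
    """Return a score in [0.0, 1.0] based on naive term overlap."""
    q_terms = {w.lower().strip("?.,") for w in question.split()}
    a_terms = {w.lower().strip("?.,") for w in answer.split()}
    if not q_terms:
        return 0.0
    return len(q_terms & a_terms) / len(q_terms)

score = relevancy_score(
    "What region is the service deployed in?",
    "The service is deployed in the europe-west1 region.",
)
assert 0.0 <= score <= 1.0
```

The point of a framework is to run many such metrics (faithfulness, toxicity, relevancy) across a test suite every time you change a prompt, so regressions show up as numbers rather than anecdotes.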
Agent Protocols: MCP, A2A, and ADK in Action
Model Context Protocol (MCP) standardizes how models and agents access external tools and resources. Agent2Agent (A2A) Protocol defines how agents running on diverse frameworks and platforms communicate. Together, MCP and A2A enable collaborative agent ecosystems with seamless interoperability.
In this session, we’ll unpack both protocols and show how they complement each other. We’ll then see demos of how to implement these protocols in agent frameworks such as the Agent Development Kit (ADK) and deploy them to the cloud, giving you practical knowledge to start building your own interoperable agents.
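As a taste of what "standardizing tool access" means in practice: MCP messages are JSON-RPC 2.0 requests. Below is a sketch of a `tools/call` request; the envelope shape follows the MCP specification, while the tool name (`get_weather`) and its arguments are made-up examples.

```python
import json

# Sketch of an MCP "tools/call" request as a JSON-RPC 2.0 message.
# The envelope (jsonrpc/id/method/params) follows the Model Context
# Protocol spec; the tool name and arguments are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_weather",           # a tool the server advertises
        "arguments": {"city": "London"}, # shape defined by the tool's schema
    },
}

payload = json.dumps(request)  # what actually goes over the wire
```

Because every MCP server speaks this same envelope, an agent framework like ADK can call tools on any server without bespoke integration code for each one.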
Avoid common LLM pitfalls
It’s easy to generate content with a Large Language Model (LLM), but the output often suffers from hallucinations (fabricated content), outdated information (not based on the latest data), reliance on public data only (no private data), and a lack of citations back to original sources. Not ideal for real-world applications. In this talk, we’ll provide a quick overview of the latest advancements in multi-modal LLMs, highlighting their capabilities and limitations. We’ll then explore various techniques to overcome common LLM pitfalls, including Retrieval-Augmented Generation (RAG) to enhance prompts with relevant data, ReAct prompting to guide LLMs in verbalizing their reasoning, Function Calling to grant LLMs access to external APIs, Grounding to link LLM outputs to verifiable information sources, and more.
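The core RAG idea mentioned above can be sketched in a few lines. This is a deliberately minimal illustration with made-up function names; the keyword-overlap retrieval stands in for the vector search a production pipeline would use.

```python
# Minimal sketch of Retrieval-Augmented Generation: retrieve relevant
# snippets (here via naive word overlap; real systems use embeddings and
# vector search) and prepend them to the prompt, so the model answers
# from your data instead of hallucinating.
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by how many words they share with the query."""
    q = set(query.lower().split())
    ranked = sorted(
        docs,
        key=lambda d: len(q & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble an augmented prompt from the top-k retrieved snippets."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

Grounding and citations follow naturally from this structure: since the model is told which snippets to use, its answer can link back to those sources.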
Building your AI agent with Agent Development Kit (ADK)
Join this workshop to get hands-on experience with the Agent Development Kit (ADK), Google’s open-source framework for building flexible and powerful AI agents. You’ll learn how ADK’s building blocks accelerate the development, testing, and evaluation of your agents. Through guided exercises, you will build a functional multi-agent application, test it locally, and deploy it to Google Cloud with the tools provided by ADK.