Speaker

Uri Rosenberg

AWS, Specialist Technical Manager of AI Services

Kfar Yona, Israel

Uri Rosenberg is the Strategic Specialist Technical Manager of AI Services at Amazon Web Services (AWS). Based in Israel, Uri works to empower strategic customers to design, build, and operate deep learning at scale.
Uri is an AWS-certified Lead Machine Learning Subject Matter Expert and holds an M.Sc. in computer science from the Tel Aviv Academic College, where his research focused on large-scale deep learning models. He is also a member of the European Commission's AI Alliance, a forum dedicated to the legal, technical, and economic implications that artificial intelligence (AI) presents to our societies.
Before his current role, Uri was the AI Specialist Technical Manager for Europe, the Middle East and Africa. Prior to AWS, he led ML projects at the AT&T Innovation Center in Israel, working on deep learning models under extreme security and privacy constraints.

Area of Expertise

  • Health & Medical
  • Information & Communications Technology
  • Physical & Life Sciences

Topics

  • Artificial Intelligence
  • GenAI
  • MLOps
  • Machine Learning and Artificial Intelligence
  • Developing Artificial Intelligence Technologies
  • AI & ML Architecture
  • Generative AI Use Cases
  • AI Agents
  • AI & ML Solutions
  • AI Ethics
  • AI Research
  • AI Hallucinations
  • Computer Hardware

Optimizing LLM Performance with Caching Strategies in OpenSearch

As organizations increasingly integrate Large Language Models (LLMs) with OpenSearch, managing computational resources and costs becomes crucial. This session explores how caching techniques can enhance LLM performance within the OpenSearch ecosystem.
We'll dive deep into implementing LLM caching strategies that complement OpenSearch's architecture, focusing on improving query response times and reducing resource consumption. The session will cover several caching approaches, including exact versus semantic matching, custom implementations, and integration patterns with OpenSearch's existing caching mechanisms.
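To illustrate the exact-versus-semantic distinction, here is a minimal, self-contained sketch of a two-tier LLM cache. It is not the session's implementation: the class name, the similarity threshold, and the `embed` function are illustrative assumptions, and the bag-of-words "embedding" is a stand-in for the sentence-embedding model (and OpenSearch k-NN index) a real deployment would use.

```python
import hashlib
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in embedding: bag-of-words term counts. A production cache
    # would call a sentence-embedding model here instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class LLMCache:
    """Two-tier cache: exact match on a hash of the prompt first,
    then a semantic fallback over stored prompt embeddings."""

    def __init__(self, threshold: float = 0.9):
        self.threshold = threshold
        self.exact = {}      # sha256(prompt) -> cached response
        self.semantic = []   # list of (embedding, cached response)

    def get(self, prompt: str):
        # Tier 1: exact match — byte-identical prompts hit immediately.
        key = hashlib.sha256(prompt.encode()).hexdigest()
        if key in self.exact:
            return self.exact[key]
        # Tier 2: semantic match — reuse the answer to the most
        # similar stored prompt, if it clears the threshold.
        query = embed(prompt)
        best, best_sim = None, 0.0
        for vec, response in self.semantic:
            sim = cosine(query, vec)
            if sim > best_sim:
                best, best_sim = response, sim
        return best if best_sim >= self.threshold else None

    def put(self, prompt: str, response: str):
        key = hashlib.sha256(prompt.encode()).hexdigest()
        self.exact[key] = response
        self.semantic.append((embed(prompt), response))
```

The threshold is the key tuning knob: too low and semantically different questions get stale answers; too high and near-duplicate prompts still trigger a full LLM call.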
Through hands-on examples and theoretical foundations, attendees will learn how to effectively implement LLM caching in their OpenSearch deployments to achieve better performance and resource utilization.
This session is ideal for OpenSearch developers and administrators looking to optimize their LLM integrations.

