Speaker

Geethika Guruge

Principal Consultant, Mantel Group

Auckland, New Zealand

Geethika is an AWS Ambassador and Community Builder, a highly experienced Solutions Architect, and a Principal Consultant at Mantel Group. When he's not busy helping organisations migrate to the cloud and adopt cloud-native solutions, he is an active supporter of local communities and meetups.

Area of Expertise

  • Information & Communications Technology
  • Physical & Life Sciences

Topics

  • AWS
  • AWS Architecture
  • AWS DevOps
  • AWS Architect
  • AWS Lambda
  • AWS S3
  • AWS CDK
  • AWS ECS
  • GenAI
  • Generative AI
  • Agentic AI
  • AI Agents
  • Generative AI Use Cases
  • Containers
  • EKS

Serverless anti-patterns in AWS

Serverless promises to make cloud development faster, cheaper, and more reliable; however, a badly designed serverless application can deliver exactly the opposite. Learn about the good, the bad and the ugly of serverless use cases and patterns to ensure your serverless applications are built to last.

Scale to Zero with KEDA: Serverless EKS, or Just Smart Scaling?

Unlock the power of Kubernetes Event-Driven Autoscaling (KEDA) to achieve "scale to zero" on Amazon EKS. In this session, we’ll explore KEDA's architecture, its ability to handle demand spikes through flexible scaling policies, and how it compares to true serverless solutions. We’ll cover when KEDA is the right choice, how it seamlessly integrates with EKS workloads, and how it fits into the broader serverless landscape. Through real-world examples, we’ll share best practices for optimizing costs and leveraging KEDA to bridge the gap between traditional containerized workloads and dynamic, serverless-like scalability. Whether you're looking to improve resource efficiency or enhance your scaling strategies, this session will provide actionable insights to make the most of KEDA on EKS.
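
To make the scale-to-zero idea concrete, the sketch below registers a KEDA ScaledObject against an EKS Deployment using the official Kubernetes Python client. It is illustrative only: the deployment name, namespace and SQS queue URL are placeholders, and it assumes KEDA and the IAM permissions for the aws-sqs-queue scaler are already in place on the cluster.

```python
# Sketch: register a KEDA ScaledObject that scales an EKS Deployment
# between 0 and 20 replicas based on Amazon SQS queue depth.
# Assumes KEDA is installed and the SQS scaler's IAM setup exists;
# names, namespace and queue URL are placeholders.
from kubernetes import client, config

scaled_object = {
    "apiVersion": "keda.sh/v1alpha1",
    "kind": "ScaledObject",
    "metadata": {"name": "orders-worker-scaler", "namespace": "orders"},
    "spec": {
        "scaleTargetRef": {"name": "orders-worker"},  # the Deployment to scale
        "minReplicaCount": 0,                         # scale to zero when idle
        "maxReplicaCount": 20,
        "cooldownPeriod": 300,                        # seconds of no events before scaling back to 0
        "triggers": [{
            "type": "aws-sqs-queue",
            "metadata": {
                "queueURL": "https://sqs.ap-southeast-2.amazonaws.com/123456789012/orders",
                "queueLength": "5",                   # target messages per replica
                "awsRegion": "ap-southeast-2",
            },
        }],
    },
}

config.load_kube_config()  # or load_incluster_config() when running in-cluster
client.CustomObjectsApi().create_namespaced_custom_object(
    group="keda.sh",
    version="v1alpha1",
    namespace="orders",
    plural="scaledobjects",
    body=scaled_object,
)
```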

Benefits to the Ecosystem:

Kubernetes Event-Driven Autoscaling (KEDA) brings significant advantages to the cloud-native ecosystem by enabling efficient, event-driven scaling on Kubernetes. This session will provide a deeper understanding of how KEDA enhances the Kubernetes ecosystem and why it is a game-changer for organizations looking to embrace dynamic scaling while leveraging their existing Kubernetes investments. Attendees will gain actionable insights into how KEDA can improve their cloud efficiency, reduce costs, and enable smarter scaling strategies.

Effortless Kubernetes: Simplifying Cluster Management with EKS Auto Mode

Managing Kubernetes clusters can often be complex, requiring constant attention to scaling, upgrades, and infrastructure management. In this session, we'll explore how AWS EKS Auto Mode simplifies Kubernetes cluster management, enabling developers to focus on application delivery rather than the underlying infrastructure. Attendees will learn how EKS Auto Mode automatically handles node scaling, upgrades, and capacity management, ensuring a highly available and cost-effective Kubernetes environment. We’ll dive into best practices for setting up EKS Auto Mode, discuss its integration with other AWS services, and share real-world use cases to showcase how it can streamline operations and reduce management overhead. Whether you're new to Kubernetes or a seasoned pro, this session will empower you to deploy and manage your clusters with ease.
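
As a rough illustration of how little cluster plumbing Auto Mode asks for, the sketch below creates an Auto Mode cluster with boto3. It is a sketch under assumptions, not a definitive recipe: the role ARNs and subnet IDs are placeholders, and the Auto Mode fields (computeConfig, the built-in node pools, blockStorage and elasticLoadBalancing) should be checked against the current EKS CreateCluster API before use.

```python
# Sketch: create an EKS cluster with Auto Mode enabled using boto3.
# Auto Mode is switched on by enabling managed compute, block storage and
# elastic load balancing together; ARNs and subnet IDs are placeholders and
# the exact field names should be verified against the current EKS API.
import boto3

eks = boto3.client("eks", region_name="ap-southeast-2")

response = eks.create_cluster(
    name="auto-mode-demo",
    roleArn="arn:aws:iam::123456789012:role/eks-cluster-role",
    resourcesVpcConfig={
        "subnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],
    },
    accessConfig={"authenticationMode": "API"},
    computeConfig={                        # Auto Mode managed compute
        "enabled": True,
        "nodePools": ["general-purpose", "system"],
        "nodeRoleArn": "arn:aws:iam::123456789012:role/eks-auto-node-role",
    },
    kubernetesNetworkConfig={
        "elasticLoadBalancing": {"enabled": True},
    },
    storageConfig={
        "blockStorage": {"enabled": True},
    },
)
print(response["cluster"]["status"])  # typically "CREATING"
```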

Benefits to the Ecosystem:

AWS EKS Auto Mode provides significant benefits to the broader Kubernetes ecosystem by streamlining cluster management and reducing operational complexity. By automatically handling tasks like node scaling, upgrades, and capacity management, EKS Auto Mode frees developers from manual interventions, enabling them to focus on building and deploying applications. This simplified management model allows organizations to adopt Kubernetes at scale with less overhead, improving the speed of application delivery and reducing the risk of misconfigurations.

The reduction in management overhead benefits both small startups and large enterprises. Smaller teams can take full advantage of Kubernetes without needing deep expertise in managing the infrastructure, while larger teams can focus on innovation and scaling, rather than cluster maintenance. By reducing operational friction, EKS Auto Mode fosters a more agile and efficient ecosystem, ultimately driving faster innovation and cost savings across the board.

This session will empower attendees to leverage these benefits, allowing organizations to achieve greater efficiency, reduced time-to-market, and a more seamless Kubernetes experience.

Architecting for Sustainability: Building a Greener Future

Sustainability has become a paramount concern across various disciplines, and the field of Information Technology (IT) is no exception. As we continue to build solutions for the future, we bear a significant responsibility to ensure that these innovations are not just efficient and cutting-edge but also environmentally sustainable, leaving behind a habitable planet for generations to come.

This presentation will delve into what "Architecting for Sustainability" on AWS really means, exploring the essential starting points and the compromises you may have to make along the way.

Accelerating Serverless Performance with AWS Lambda SnapStart

In this presentation, I will dive into the AWS Lambda "SnapStart" feature and how it's taking serverless performance to new heights, specifically for Lambdas written in Java.

This session will explore how SnapStart revolutionizes the startup time of Java-based Lambda functions, significantly reducing cold start delays and improving overall execution efficiency. It'll uncover the technical details behind SnapStart and provide practical insights and best practices for leveraging it effectively in your own serverless projects.

Furthermore, this presentation will offer valuable knowledge and actionable takeaways for optimizing Java-based AWS Lambda functions, showing how SnapStart can supercharge your cloud applications and workflows, especially when working with Java.
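
As a concrete illustration, SnapStart is switched on through the function's configuration and takes effect on published versions. The sketch below uses boto3 with a placeholder function name and assumes the function already runs on a SnapStart-supported Java runtime.

```python
# Sketch: enable SnapStart on an existing Java Lambda function and publish
# a version so a snapshot is taken. The function name is a placeholder.
import boto3

lambda_client = boto3.client("lambda", region_name="ap-southeast-2")

# SnapStart snapshots are created for published versions, not $LATEST.
lambda_client.update_function_configuration(
    FunctionName="orders-handler",
    SnapStart={"ApplyOn": "PublishedVersions"},
)

# Wait until the configuration update is applied, then publish a version;
# Lambda initialises the function once and caches the snapshot for fast restores.
waiter = lambda_client.get_waiter("function_updated_v2")
waiter.wait(FunctionName="orders-handler")

version = lambda_client.publish_version(FunctionName="orders-handler")
print(version["Version"], version.get("SnapStart"))
```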

Applying Retrieval-Augmented Generation (RAG) to Combat Hallucinations in GenAI

In the rapidly evolving field of Generative AI (GenAI), one persistent challenge is the phenomenon of "hallucinations," where models generate plausible-sounding but incorrect or nonsensical information. This presentation delves into the innovative technique of Retrieval-Augmented Generation (RAG) as a solution to this problem. By integrating retrieval mechanisms with generative models, RAG significantly enhances the accuracy and reliability of AI outputs. Attendees will learn about the principles of RAG, its implementation strategies, and practical applications, gaining insights on how to effectively reduce hallucinations in their own GenAI applications.
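
For readers who want to see the shape of the technique, here is a minimal, library-agnostic sketch of the RAG loop: embed the documents, retrieve the passages closest to the user's question, and constrain the model to answer from them. The embed() and generate() functions are hypothetical stand-ins for whatever embedding and generation endpoints you use (for example, Amazon Bedrock models), not a specific API.

```python
# Sketch of the core RAG loop: retrieve supporting passages first, then ask
# the model to answer only from them, which reduces hallucinated answers.
# embed() and generate() are hypothetical stand-ins for your embedding and
# LLM endpoints; swap in real clients as needed.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical embedding call; returns a fixed-size vector for `text`."""
    raise NotImplementedError

def generate(prompt: str) -> str:
    """Hypothetical LLM call; returns the model's completion for `prompt`."""
    raise NotImplementedError

def retrieve(query: str, docs: list[str], doc_vectors: np.ndarray, k: int = 3) -> list[str]:
    """Return the k documents whose embeddings are closest to the query (cosine similarity)."""
    q = embed(query)
    scores = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

def answer(query: str, docs: list[str], doc_vectors: np.ndarray) -> str:
    """Ground the prompt in retrieved context so the model answers from it."""
    context = "\n\n".join(retrieve(query, docs, doc_vectors))
    prompt = (
        "Answer the question using ONLY the context below. "
        "If the context does not contain the answer, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return generate(prompt)

# Usage: pre-compute doc_vectors = np.stack([embed(d) for d in docs]) once,
# then call answer(query, docs, doc_vectors) for each user question.
```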
