Speaker

Thiru Dinesh

Head of AI @ Rootcode

Thiru is the Head of Artificial Intelligence at Rootcode, where he leads the AI division and specializes in designing and developing cutting-edge AI strategies and solutions for enterprises and governments worldwide. He is also a visiting lecturer at SLIIT, delivering Master's-level lectures on deep learning and AI. His research interests include large language models, conversational AI, computer vision, and generative AI.

In his free time, he tries to break open neural networks or finds peace in the Sri Lankan mountains.

Building the future of multi-modal search engines with Gemini

Search engines have revolutionized our access to information, putting nearly all of humanity's knowledge at our fingertips. As we navigate the age of AI, the challenge now lies in building the next generation of more intuitive, in-application search engines that go beyond traditional text-based limitations.

In this talk, we will delve deep into the fundamentals of constructing multi-modal search engines that can process and understand diverse modalities of data. We will focus on how advanced multi-modal models like Gemini and open-source alternatives like PaliGemma can be leveraged to create powerful, private search engines tailored for domain-specific applications.
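
To make the idea concrete, here is a minimal sketch of the retrieval core behind a multi-modal search engine: every item, whether text or image, is mapped into a shared embedding space, and queries are answered by nearest-neighbour search. The embed_text and embed_image functions below are hypothetical stand-ins for calls to a multimodal embedding model such as Gemini or PaliGemma through an SDK of your choice; they are stubbed with deterministic random vectors so the retrieval logic itself runs as written.

```python
# Minimal sketch of multi-modal retrieval over a shared embedding space.
# NOTE: embed_text and embed_image are hypothetical placeholders for calls to
# a real multimodal embedding model (e.g. Gemini or PaliGemma); they are
# stubbed here so the ranking logic is runnable on its own.
import numpy as np

EMBED_DIM = 128  # assumed embedding size for this sketch


def embed_text(text: str) -> np.ndarray:
    # Placeholder: replace with a real text/multimodal embedding call.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=EMBED_DIM)


def embed_image(path: str) -> np.ndarray:
    # Placeholder: replace with a real image/multimodal embedding call.
    rng = np.random.default_rng(abs(hash(path)) % (2**32))
    return rng.normal(size=EMBED_DIM)


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))


# Index a small mixed-modality corpus once, then search it with text queries.
corpus = [
    {"id": "doc-1", "vector": embed_text("Quarterly sales report for 2023")},
    {"id": "img-1", "vector": embed_image("warehouse_photo.jpg")},
    {"id": "img-2", "vector": embed_image("product_diagram.png")},
]


def search(query: str, top_k: int = 2):
    q = embed_text(query)
    ranked = sorted(
        corpus,
        key=lambda item: cosine_similarity(q, item["vector"]),
        reverse=True,
    )
    return ranked[:top_k]


if __name__ == "__main__":
    for hit in search("diagram of the product"):
        print(hit["id"])
```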

We will explore practical implementations and real-world applications, demonstrating how these technologies can drive the future of in-application search. The session will then connect these ideas to insights from Josh Woodward's and Sundar Pichai's Google I/O 2024 keynotes on Gemini and the future of search, and will conclude by illustrating the transformative potential of AI-powered search for both major platforms like Google and individual developers who want to build the applications of the future.

Modern Strategies for Leveraging Large Language Models in the Enterprise Using Vertex AI

With the downstream commercialization of large language models (LLMs), businesses are now looking to use powerful models like PaLM on their private enterprise data. But as powerful as these models are in their raw form, they need help understanding data they were not trained on. This talk will cover how these models can be effectively leveraged to create private AI assistants for businesses.

The talk will start by introducing the concept of foundation models (like PaLM and PaLM 2) and their usefulness.

We will then discuss why, although finetuning the model by retraining it on custom data seems like an obvious solution, it can be costly and data-intensive.

We will then cover modern zero-shot, few-shot, and prompt-free finetuning strategies that are effective with less data and can be far cheaper and faster to execute.
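
As a concrete illustration of the few-shot idea, the sketch below builds a prompt that steers a model with a handful of labeled examples instead of retraining it. The example messages and labels are hypothetical.

```python
# Sketch of a few-shot prompt: the model is guided by a few labeled examples
# placed in the prompt itself, so no retraining is required.
# The example messages and labels below are hypothetical.
examples = [
    ("The delivery arrived two weeks late and the box was damaged.", "negative"),
    ("Support resolved my issue within minutes, fantastic service.", "positive"),
]


def build_few_shot_prompt(query: str) -> str:
    lines = [
        "Classify the sentiment of each customer message as positive or negative.",
        "",
    ]
    for text, label in examples:
        lines.append(f"Message: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Message: {query}")
    lines.append("Sentiment:")
    return "\n".join(lines)


print(build_few_shot_prompt("The new dashboard is confusing and slow."))
```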

The talk will then show how the Vertex AI platform can be used to finetune and run inference on powerful LLMs like PaLM.
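
For orientation, a minimal sketch of running inference against a PaLM-family text model on Vertex AI might look like the following. It assumes the google-cloud-aiplatform Python SDK and a GCP project with Vertex AI enabled; the project ID, region, and model version are placeholders, and exact class and parameter names may differ across SDK versions.

```python
# Sketch of calling a PaLM-family text model hosted on Vertex AI.
# Assumes the google-cloud-aiplatform SDK; project, location, and model name
# below are placeholders for this example.
import vertexai
from vertexai.language_models import TextGenerationModel

vertexai.init(project="my-gcp-project", location="us-central1")  # placeholders

model = TextGenerationModel.from_pretrained("text-bison@001")
response = model.predict(
    "Summarize the key risks in the attached vendor contract in three bullet points.",
    temperature=0.2,
    max_output_tokens=256,
)
print(response.text)
```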

Finally, as a closing message, the talk will highlight the benefits of using LLMs on private data and how Vertex AI can streamline and simplify the entire process of adopting LLMs in the enterprise.

Training Neural Networks with TensorFlow

This session will cover the fundamentals of a complete neural network pipeline and show how the TensorFlow ecosystem can be used to effectively define and train neural networks, through an interactive code walkthrough. The session will conclude with an overview of the TensorFlow ecosystem's features, best practices, and resources for getting started.
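
As a flavour of what such a walkthrough covers, a minimal end-to-end Keras training pipeline on MNIST might look like this (a sketch, not the session's actual notebook):

```python
# Minimal end-to-end TensorFlow/Keras pipeline: load data, define a model,
# compile it with a loss and optimizer, train, and evaluate.
import tensorflow as tf

# Load and normalize the MNIST dataset (downloaded on first use).
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A small fully connected classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

model.fit(x_train, y_train, epochs=5, validation_split=0.1)
model.evaluate(x_test, y_test)
```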

GDG DevFest Sri Lanka 2022

December 2022, Colombo, Sri Lanka
