Speaker

Henry Ruiz

Research Scientist at Texas A&M AgriLife Research, GDE in ML

College Station, Texas, United States

I'm a Research Scientist at Texas A&M AgriLife. My expertise and research interests lie in applying geophysics tools such as Ground-Penetrating Radar (GPR), mathematical and electromagnetic simulations, signal processing, artificial intelligence, machine learning, and deep learning. My current research focuses on designing, developing, and implementing end-to-end software solutions and computational algorithms that analyze remote sensing datasets to tackle agricultural challenges, using state-of-the-art data science, signal processing, and machine learning methods.
Beyond my job, I am an open-source advocate deeply committed to community engagement and knowledge sharing. As a Google Developer Expert (GDE) in Machine Learning and a Google Cloud Champion, I actively contribute to the community by creating and publishing content on machine learning, TensorFlow, and related Google Cloud Platform (GCP) services such as Vertex AI, speaking at developer conferences and meetups, and mentoring students and startups seeking guidance in machine learning.

Topics

  • Machine Learning & AI
  • Computer Vision
  • Deep Learning
  • Machine Learning
  • Data Science

TensorFlow Everywhere (Workshop)

Model deployment is perhaps the most important step in the ML cycle. We spend a lot of time and effort experimenting with different algorithms, training, and tuning our model parameters, so after evaluating performance and obtaining that long-awaited score, it is time to release the model and show it to the world. It sounds like graduation time, right? However, statistics show that around 60% of models never make it into production, mainly because moving a model into a production environment is not simple and requires extra skills.

In this workshop, we will explore different deployment scenarios for releasing our ML models and learn how easy it is to move them to production using Google Cloud Platform (GCP).
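
To give a flavor of the deployment path covered, below is a minimal sketch using the Vertex AI Python SDK; the project ID, bucket path, serving container image, and display names are placeholders, not real resources.

    # Minimal sketch of deploying a trained model to a Vertex AI endpoint.
    # Project, bucket, and container image below are illustrative placeholders.
    from google.cloud import aiplatform

    aiplatform.init(project="my-gcp-project", location="us-central1")

    # Upload a TensorFlow SavedModel as a Vertex AI Model resource.
    model = aiplatform.Model.upload(
        display_name="demo-classifier",
        artifact_uri="gs://my-bucket/saved_model/",
        serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-12:latest",
    )

    # Deploy the model to a managed endpoint for online prediction.
    endpoint = model.deploy(machine_type="n1-standard-4")

    # Send a test request to the deployed endpoint.
    prediction = endpoint.predict(instances=[[0.1, 0.2, 0.3, 0.4]])
    print(prediction.predictions)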

End-to-End computer vision projects

In this talk, I'll cover the development process of Tumaini, a mobile application that uses artificial intelligence (AI) to detect pests and diseases affecting bananas. I'll discuss the architecture of the app, how the dataset was created, and how the model was deployed on the device. https://doi.org/10.1186/s13007-019-0475-z

Modern Deep Learning Workshop: From transformers to LLMs

This beginner-friendly workshop provides an introduction to the fundamentals of generative AI, including Transformers, GANs (Generative Adversarial Networks), Diffusion Models, Reinforcement Learning from Human Feedback (RLHF), and Large Language Models (LLMs for short).
Hands-on demos and guided tutorials will help you get on top of these hot topics in tech and advance your career. Familiarity with Python is recommended but not mandatory.
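
As a taste of the Transformer fundamentals, here is a minimal, framework-free sketch of scaled dot-product attention; the shapes and toy values are purely illustrative.

    # Scaled dot-product attention, the core operation inside a Transformer layer.
    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        """Q, K, V: (seq_len, d_k) arrays. Returns the attended values."""
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over keys
        return weights @ V  # weighted sum of the value vectors

    # Toy example: 3 tokens with 4-dimensional embeddings.
    rng = np.random.default_rng(0)
    Q = K = V = rng.normal(size=(3, 4))
    print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 4)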

Let's embark on the Deep Learning journey together!

Automating your ML pipelines using Kubeflow and Vertex AI

This workshop will delve into the world of automating machine learning workflows using Kubeflow and Vertex AI. Kubeflow is an open-source platform that simplifies the deployment of machine learning workflows on Kubernetes, while Vertex AI is Google Cloud's unified ML platform. Together, these technologies enable data scientists and ML engineers to build, deploy, and manage ML models at scale with greater efficiency and reproducibility. Using these powerful tools, participants will learn how to streamline their ML pipelines, from data preparation to model deployment. By the end of the session, attendees will have a solid understanding of how to leverage Kubeflow and Vertex AI to enhance their ML development process and increase productivity.
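
As an illustration of the pattern the workshop walks through, here is a minimal sketch that defines a pipeline with the Kubeflow Pipelines (KFP) v2 SDK, compiles it, and submits it to Vertex AI Pipelines; the project, region, bucket, and component logic are placeholders.

    # Define, compile, and run a toy pipeline on Vertex AI Pipelines.
    from kfp import dsl, compiler
    from google.cloud import aiplatform

    @dsl.component
    def preprocess(message: str) -> str:
        # Stand-in for a real data-preparation step.
        return message.upper()

    @dsl.component
    def train(data: str) -> str:
        # Stand-in for a real training step.
        return f"model trained on: {data}"

    @dsl.pipeline(name="demo-pipeline")
    def demo_pipeline(message: str = "hello vertex"):
        prep_task = preprocess(message=message)
        train(data=prep_task.output)

    # Compile the pipeline definition to a reusable spec file.
    compiler.Compiler().compile(demo_pipeline, "demo_pipeline.yaml")

    # Submit the compiled spec as a Vertex AI pipeline run.
    aiplatform.init(project="my-gcp-project", location="us-central1")
    job = aiplatform.PipelineJob(
        display_name="demo-pipeline-run",
        template_path="demo_pipeline.yaml",
        pipeline_root="gs://my-bucket/pipeline-root",
    )
    job.run()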

Multimodality with Gemini: Unleashing the Power of Text, Videos, Images, and More

Gemini is the most capable and general model Google has ever built. It was built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across, and combine different types of information, including text, code, images, and video. This talk dives into the exciting world of Gemini, a cutting-edge foundation model developed by Google. Discover how Gemini seamlessly integrates text and image processing, enabling you to:

- Analyze and understand the content of images, videos, and audio files
- Perform cross-modal tasks like image captioning and visual question-answering
- Explore the potential of multimodality for various applications, from creative content generation to advanced information retrieval

Additionally, we'll delve into the core techniques that make LLMs multimodal, including contrastive learning and LIMoE—Learning Multiple Modalities with One Sparse Mixture-of-Experts Model. Learn more here: https://research.google/blog/limoe-learning-multiple-modalities-with-one-sparse-mixture-of-experts-model/
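
For a sense of what a multimodal call looks like in practice, here is a minimal sketch using the Google AI Python SDK; the API key, model name, and image path are placeholders.

    # Mix an image and a text instruction in a single Gemini prompt.
    import google.generativeai as genai
    from PIL import Image

    genai.configure(api_key="YOUR_API_KEY")
    model = genai.GenerativeModel("gemini-1.5-flash")

    image = Image.open("banana_leaf.jpg")
    response = model.generate_content([image, "Describe any signs of disease in this leaf."])
    print(response.text)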

Join us to unlock the power of Gemini and push the boundaries of AI!

Unleash Generative AI Power in our Hands-on Workshop!

This beginner-friendly workshop will introduce the fundamentals of generative AI and cover some advanced topics, including Transformers, GANs (Generative Adversarial Networks), Diffusion Models, Reinforcement Learning from Human Feedback (RLHF), and large language models (LLMs for short). Hands-on demos of the Gemini and LangChain APIs will be shared to help attendees better understand and stay on top of these hot topics in ML.

1. Generative AI foundations: from Transformers to LLMs
Google generative AI APIs:
2. Introduction to the Gemini API
3. Multimodality with Gemini: Unleashing the Power of Text, Audio, Videos, Images, and More
4. Multi-agent applications using the Vertex AI reasoning engine and Agent Builder
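
As a preview of the hands-on demos, here is a minimal sketch of calling Gemini through LangChain; the package, model name, and API key are assumptions based on the langchain-google-genai integration and may differ from the workshop material.

    # Single-turn Gemini call through the LangChain integration.
    from langchain_google_genai import ChatGoogleGenerativeAI

    llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash", google_api_key="YOUR_API_KEY")
    reply = llm.invoke("Explain, in two sentences, what a diffusion model is.")
    print(reply.content)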

LLM Applications components and design patterns

This workshop will focus on the design patterns and essential components necessary for developing applications using large language models (LLMs). It will also cover best practices for integrating LLMs into our applications, highlighting the importance of the context window, modular design, scalability, and maintenance. Participants will acquire practical knowledge on developing LLM applications, including chat applications, retrieval-augmented generation (RAG) systems, and agent-based tools.
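
To make the RAG pattern concrete, here is a deliberately simplified, framework-free sketch; the embed() and call_llm() helpers are hypothetical stand-ins for a real embedding model and LLM client.

    # Simplified retrieval-augmented generation (RAG) loop.
    import numpy as np

    def embed(text: str) -> np.ndarray:
        # Placeholder: a real system would call an embedding model here.
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        return rng.normal(size=8)

    def call_llm(prompt: str) -> str:
        # Placeholder: a real system would call Gemini or another LLM here.
        return f"[LLM answer grounded in a prompt of {len(prompt)} characters]"

    documents = [
        "Kubeflow Pipelines orchestrates ML workflows on Kubernetes.",
        "Vertex AI is Google Cloud's managed ML platform.",
        "RAG grounds LLM answers in retrieved documents.",
    ]
    doc_vectors = np.stack([embed(d) for d in documents])

    def answer(question: str, top_k: int = 2) -> str:
        # 1) Retrieve: rank documents by cosine similarity to the question.
        q = embed(question)
        sims = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
        context = "\n".join(documents[i] for i in np.argsort(sims)[::-1][:top_k])
        # 2) Augment: build a prompt that fits the model's context window.
        prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
        # 3) Generate: ask the LLM for a grounded answer.
        return call_llm(prompt)

    print(answer("What does RAG do?"))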

Workshop: Developing a Multimodal Chat that can generate images using Gemini and Imagen

This workshop will explore the exciting intersection of multimodal AI and image generation, focusing on two powerful models: Google's Gemini and Imagen. Participants will learn how to leverage these cutting-edge technologies to create a chat interface capable of understanding and generating text and images. By the end of the session, attendees will have hands-on experience integrating these models into a functional multimodal chat application.
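
To hint at the kind of integration covered, here is a minimal routing sketch in which Gemini handles text turns and image requests are forwarded to Imagen; the Vertex AI module paths, model IDs, and file names are assumptions and may need adjusting.

    # Route chat turns: Gemini answers text, Imagen handles "draw ..." requests.
    import vertexai
    from vertexai.generative_models import GenerativeModel
    from vertexai.preview.vision_models import ImageGenerationModel

    vertexai.init(project="my-gcp-project", location="us-central1")
    chat_model = GenerativeModel("gemini-1.5-pro")
    image_model = ImageGenerationModel.from_pretrained("imagegeneration@006")

    def chat_turn(user_message: str) -> str:
        # Naive intent check; a real app would use something more robust.
        if user_message.lower().startswith("draw "):
            result = image_model.generate_images(prompt=user_message[5:], number_of_images=1)
            result.images[0].save(location="generated.png")
            return "Image saved to generated.png"
        return chat_model.generate_content(user_message).text

    print(chat_turn("What can Imagen do?"))
    print(chat_turn("draw a banana plant in watercolor style"))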
