
Victor Ashioya
Machine learning researcher, Infospace Meta
Kilifi, Kenya
Actions
Machine learning researcher currently pursuing advanced studies in deep learning for computer vision at WorldQuant University, building on a foundation in applied data science and AI research. Dedicated to exploring AI alignment, "hallucinations", and red teaming methodologies to enhance the reliability and impact of AI systems.
Links
Area of Expertise
Topics
Securing your APIs: Best Practices
API security is there to ensure API requests are authenticated, authorized and validated. APIs are great for interacting with users by facilitating the transfer of data between your system and external third party thus also a great target by hackers as they offer a way to exploit an organization.
We will explore the best practices for securing your APIs, as well as the common security risks to avoid such as injection attacks, Cross-Site Request Forgery (CSRF) and Cross-Site Scripting (XSS). We will also discuss the importance of monitoring and logging in maintaining the security of your APIs. By the end of this session, you will have a solid understanding of how to design and implement secure APIs, and be equipped with the knowledge and tools necessary to protect your organization's data.
Redteaming for Large Language Models
Large language models (LLMs) like GPT-3 have shown remarkable capabilities in generating human-like text. However, deploying LLMs to production presents unique security risks if deployed without proper safeguards. This talk will provide an overview of red teaming techniques to uncover potentially harmful behaviors in LLMs before production deployment. I will cover common weak points, adversarial attacks, and best practices to make LLMs more secure and aligned with human values. The goal is to spark ideas for developers and researchers about proactively identifying problems with LLMs.
Scaling Large Language Models for Natural Language Processing on Vertex AI
We will first discuss what is Vertex AI, its capabilities and the origins; why was it necessitated in the first place. We will then go over the options offered by the platform for model training and deployment covering the basics of AutoML, custom training, Model Garden and generative AI.
We then approach the ML workflow from data ingestion to the MLOps cycle even go ahead and touch on model monitoring.
MakerSuite: Prompting Generative Language Models Made Easy
Prompting is a powerful way to interact with generative language models, but it can be challenging to create prompts that generate the desired output. MakerSuite is a browser-based IDE that makes it easier to prompt generative language models. It provides a variety of features that make it easy to create, iterate on, and preview the results of prompts in real time.
This talk will introduce MakerSuite and discuss how it can be used to make prompting easier. We will cover the following topics:
* What is MakerSuite and how does it work?
* What are the different types of prompts that MakerSuite supports?
* How to use MakerSuite to create, iterate on, and preview prompts
* How to export prompts to Python code or use the PaLM API to generate the desired output in your own application
We will also discuss some examples of how MakerSuite has been used to build real-world applications.
This talk is intended for developers who are interested in learning more about MakerSuite and how it can be used to make prompting easier. No prior experience with generative language models is required.
Android Jetpack Meets Machine Learning
First, let's look at data binding. This allows you to seamlessly load TensorFlow Lite models (and others) and add them to your view hierarchy. Learn how to add models to your project, bind them to views, handle input/output tensors, and more. Next, work together to implement a sample app for image classification using ML model binding.
Next, we'll look at how data binding works, binding UI components in an XML layout directly to a data source in code. Extend data binding to also bind machine learning model inputs and outputs to integrate ML models into your apps. Learn best practices for updating your UI when model data changes.
We also discuss considerations for optimizing and validating the model for use on large-scale devices. Learn how tools like TensorFlow Lite Converter, ML Kit, and AutoML streamline retraining and keep your models up-to-date.
Finally, we discuss our continued investment in machine learning for Android and highlights of recent releases. Discover new components and frameworks to integrate smarter AI into your apps. Get a glimpse into the future of on-device machine learning on Android and the possibilities it brings.
In this session, you'll learn everything you need to know to integrate machine learning models into your own Android apps using Jetpack. Developers of all levels will benefit from an overview of how Jetpack enables the next generation of intelligent apps.
AI's New DIY Era: Build, Deploy, Transform with PaLM2 API + MakerSuite
The new PaLM2 language model API and MakerSuite development tools enable anyone to build powerful AI applications. In this talk, I will give an overview of how to leverage PaLM for natural language generation, semantic search, question answering, and other NLP functions. I will then discuss MakerSuite; what it is and its capabilities from prototyping with generative AI to top prompting mechanisms — features that make developing and deploying PaLM2-based applications faster and easier, not to mention reducing the financial burden on the developers. By walking through, I will demonstrate how PaLM2 and MakerSuite can be used to generate (pun intended) better AI applications. From accessing the PaLM2 API for the first time to publishing finished applications, this talk will teach developers at any skill level how they can harness these open technologies to develop transformative AI today.
No GPU No Problem!
As businesses gather massive amounts of data, running and deploying advanced machine learning models at scale is critical to gaining valuable insights. Baseten is an open source machine learning infrastructure platform that unlocks the power of data by enabling you to build, deploy, and run ML applications with models of any size and complexity.
In this talk, I will give an overview of how Baseten can help businesses unlock the potential of their data using state-of-the-art machine learning techniques. Baseten supports transformer models with billions of parameters as well as small models and provides low-latency APIs to access them. You can train, tune, evaluate, and deploy models of any framework through a single simple interface. Baseten auto-scales infrastructure to maximise performance so you can go from prototype to production instantly.
I will demo how Baseten powers data-driven applications with complex models by enabling you to query, retrain, and optimise large models on demand without the need for GPUs. You will learn how Baseten can help your business gain a competitive advantage by building advanced ML applications that tap into huge datasets inferring from a library of LLMs. An interactive Q&A session will follow, where you can ask questions about how to leverage Baseten for your data and models.
Baseten provides an easy to use yet powerful platform for bringing complex data-fueled models to life. Whether you want to implement transformer models that achieve state-of-the-art results or build custom models tailored to your business needs, Baseten allows you to experiment, deploy, and scale ML models to unlock new possibilities with your data.
Why Just Read Data When You Can Chat With It?
As businesses accumulate massive troves of data, unlocking the insights and value from that data is key to success. Language models are a powerful tool for mining and leveraging data, but developing data-driven applications powered by them has been challenging.
In this talk, I will present LangChain, a framework for quickly building applications based on language models combined with Google Drive as the knowledge base. LangChain enables you to tap into pretrained models like GPT-3, and more to create data-powered apps for your business.
With LangChain, you can build conversational agents that provide personalized support based on customer data, summarization systems that condense volumes of reviews or feedback, question answering bots that provide quick access to institutional knowledge, and more. I will demo building an open-domain question answering application using Langchain and large language models.
LangChain is very unique because of its ability to 'chain' different components together, so you can load, combine, and deploy models in just a few lines of code. You will learn how to leverage LangChain and powerful generative AI models to unlock key insights and value from your data. LangChain has the potential to help businesses everywhere harness the potential of their data and build the data-powered applications of the future.

Victor Ashioya
Machine learning researcher, Infospace Meta
Kilifi, Kenya
Links
Actions
Please note that Sessionize is not responsible for the accuracy or validity of the data provided by speakers. If you suspect this profile to be fake or spam, please let us know.
Jump to top