Serg Masis
Lead Data Scientist, Syngenta ● Bestselling Author of ML/AI books
Raleigh, North Carolina, United States
Serg Masís has been at the confluence of the internet, application development, and analytics for the last two decades. Currently, he's an Agronomic Data Scientist at Syngenta, a leading agribusiness company with a mission to improve global food security. Before that role, he co-founded a search engine startup, incubated by Harvard Innovation Labs, that combined the power of cloud computing and machine learning with principles from decision-making science to expose users to new places and events efficiently. Whether it pertains to leisure activities, plant diseases, or customer lifetime value, Serg is passionate about providing the often-missing link between data and decision-making. He wrote the bestselling book "Interpretable Machine Learning with Python" and is currently writing a new book, "DIY AI," for Addison-Wesley, aimed at a broader audience of curious developers, makers, and hackers.
QA for ML: How We Can Trust AI with Food Sustainability
This session will explore the role of Quality Assurance (QA) in AI, drawing on its rich history and evolution to show why it is essential to designing systems that are not only efficient but also trustworthy.
Through the lens of agricultural AI, we will examine practical examples of QA at work, offering insights into how to build AI systems that are not only technologically advanced but also reliable, paving the way for a future where AI can be embraced with confidence.
Beyond Code: The Essential AI Skillsets for Development Teams
Allow me to first address the elephant in the room.
AI won't take your job (not yet, anyway!).
Ignore THE HYPE. Don't fear that coding assistants will do the job of entire software development teams! The technology isn't there yet. As impressive as it may seem, it's too primitive to do that kind of damage. But beyond the hype, there's HOPE:
- We will discuss what makes AI worthwhile, so there's HOPE that even the primitive AI we already have can help us with complex problems.
- We will address implementation strategies, so there's HOPE that we can play to AI's strengths and mitigate its weaknesses, freeing us from mundane tasks to focus on the work that drives innovation.
- We will show how humans-in-the-loop can be AI's biggest asset, so there's HOPE that humans and AI working together can improve business outcomes!
It will not end software development; instead, it will transform it. And not in the way the media narrative has led us to believe, which is through simply generating code. Beyond code, there's AI-assisted ideation, project management, data analysis, collaboration, and much more. The last portion of this session will be a ROADMAP focusing on many examples of how development teams can leverage the technology, and in particular, how software developers can create custom solutions.
Lastly, the session will offer strategies for teams to upskill in AI, including engaging in online courses and workshops, contributing to open-source AI projects, and keeping abreast of the latest research and tools. This proactive approach is crucial for developers and all team members, ensuring they remain competitive and innovative in an increasingly AI-centric world.
DIY AI: Facial Recognition from Scratch
Facial recognition systems are everywhere. Of course, they're where you would expect them, such as airports, border crossings, and government offices. However, they're also in some public surveillance cameras, all over social media, embedded in smart home solutions, and even in your phone. Have you ever wondered how facial recognition systems work?
In this hands-on session, we will build a facial recognition system from scratch using open-source technologies and publicly available pre-trained models. We will create a JavaScript web app that uses TensorFlow Lite and pre-trained models on the client side to extract facial landmarks (nose, mouth, eyes, chin, etc.). For the server side, we will create a Python API that uses another pre-trained model to generate a unique facial descriptor from those landmarks and compare it against a vector database of faces. The JavaScript web app connects with the Python API to determine whose face it is. However, to address privacy concerns, we will also demonstrate ways of building the system without relying on server-side components, and during the tutorial, participants can optionally enter their faces into the face database and have the biometrics removed at the end. The goal is for developers to leave confident that they can apply what they learned to their own projects, with a broader understanding of facial recognition, vector databases, and machine learning with JavaScript.
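To give a flavor of the server-side matching step, here is a minimal sketch, assuming the face_recognition library (a dlib wrapper) and a plain NumPy array standing in for the vector database; the tutorial's actual models and code may differ.

```python
# Minimal sketch of server-side face matching (an illustrative assumption, not the tutorial's exact code).
import numpy as np
import face_recognition

# "Database": 128-dimensional descriptors for known faces, plus their names.
known_names = ["alice", "bob"]
known_descriptors = np.array([
    face_recognition.face_encodings(face_recognition.load_image_file(f"{name}.jpg"))[0]
    for name in known_names
])

def find_closest_match(query_image_path, threshold=0.6):
    """Compute a descriptor for the query face and return the closest known name."""
    query_image = face_recognition.load_image_file(query_image_path)
    encodings = face_recognition.face_encodings(query_image)
    if not encodings:
        return None  # no face found in the image
    distances = face_recognition.face_distance(known_descriptors, encodings[0])
    best = int(np.argmin(distances))
    return known_names[best] if distances[best] <= threshold else None
```

A production system would swap the NumPy array for a vector database with an approximate-nearest-neighbor index, but the matching logic stays the same.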
Lesson Plan
- Lesson 1: The JavaScript Web App. We will learn how to turn on the camera in a web browser, locate the face, and capture facial landmarks (nose, mouth, eyes, chin, etc.) for every frame.
- Lesson 2: The Python API & Vector Database. Here we will create an API that receives facial landmarks and leverages another deep learning model to create a facial descriptor, which it can either store in a vector database or use to find the closest match in that database.
- Lesson 3: Tying It All Together. When the web app finds a face, it sends a "Find Closest Match" request to the server and then displays the name of the closest-matching person on screen.
- Bonus lesson: Local Prediction and Use Cases. With Python code, we will examine how to make all the machine learning predictions on the local device, and we will survey other use cases for facial detection and recognition systems.
Learning Objectives
- How to make Facial Detection and Recognition systems, and the theory and use cases behind them
- How to leverage MediaPipe and dlib open-source models for facial recognition tasks in both Python and JavaScript
- How to populate and search Vector Databases, and the theory and potential use cases behind them
- How to use TensorFlow Lite to build a client-side application in JavaScript

Some programming experience in any language is a prerequisite, and some Python or JavaScript experience would be helpful. Instructions to install Python, clone a repository, and create a Python environment will be provided, but it will be easier if participants already know how to do this and come prepared.
Outsmarting AI: Understanding, Preventing, and Defending Against Adversarial Attacks
Artificial Intelligence has revolutionized numerous fields, yet its vulnerability to manipulations poses a significant challenge. Deceptively simple alterations can lead a model to make glaringly incorrect predictions, a phenomenon known as an adversarial attack.
In this session, we will dive deep into the world of adversarial attacks, exploring how they function and why AI systems fall victim to them. We'll scrutinize various forms of attacks, unpacking their methodologies and implications. Understanding these techniques is key to fortifying our AI systems against potential threats.
Having examined the problem, we'll then turn our attention to solutions. We will introduce and explain two robust defense methods.
Finally, we will demonstrate how to evaluate the robustness of AI models against adversarial attacks. By assessing model performance under adversarial conditions, we can gauge the effectiveness of our defense strategies and fine-tune them for improved protection.
By the end of this session, participants will have gained a comprehensive understanding of adversarial attacks, learned effective defense strategies, and been equipped with techniques to evaluate model robustness.
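For a small taste of the material, below is a minimal sketch of one classic attack, the Fast Gradient Sign Method (FGSM), written here in PyTorch as an illustrative assumption; the session does not prescribe a specific framework or attack library.

```python
# Minimal FGSM sketch (illustrative; the session's tooling may differ).
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return a copy of input batch x perturbed to push the model away from labels y."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then clip to a valid image range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```

Even a few-line attack like this can flip the predictions of an undefended image classifier, which is exactly why the defense and evaluation techniques covered in the session matter.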
Composing with Code: A Step-by-Step Guide to AI Music Generation
Have you ever been fascinated by the seemingly magical ability of artificial intelligence to generate creative, dynamic music? Have you found yourself curious about the mechanisms behind this intriguing technology? In this comprehensive session, we delve deep into the world of AI-powered music creation, unraveling the mystery of how machines can emulate the creativity usually attributed to human musicians.
In this hands-on tutorial, after a brief introduction to the theory of generative AI for audio, we will introduce you to several cutting-edge, open-source tools and pre-trained models for audio generation. Then, we will demonstrate how to harness the power of these tools to generate your own unique compositions from scratch.
The code shown is in Python, and we will start with a simple example and build on it step by step, adding a little complexity each time as we move from text-conditional generation to melody-conditional generation to audio-continuation and audio-inpainting. Join us as we demystify the process of AI music creation and turn this cutting-edge technology into an accessible reality!
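To show what the starting point looks like, here is a minimal text-conditional sketch using Meta's open-source AudioCraft MusicGen models; the checkpoint name and generation parameters are assumptions for illustration, and the tutorial builds the examples up in more depth.

```python
# Minimal text-conditional music generation sketch with AudioCraft's MusicGen
# (checkpoint and parameters are illustrative assumptions).
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained("facebook/musicgen-small")  # small pre-trained checkpoint
model.set_generation_params(duration=8)                     # generate 8 seconds of audio

prompts = ["an upbeat acoustic folk tune with hand claps"]
wav = model.generate(prompts)  # tensor of shape (batch, channels, samples)

# Save the first clip as a loudness-normalized WAV file.
audio_write("generated_clip", wav[0].cpu(), model.sample_rate, strategy="loudness")
```

The later lessons condition the same kind of model on a melody clip or an existing recording instead of, or in addition to, a text prompt.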
Lesson Plan
- Lesson 1: With Python, we will learn how to use text-conditional generation to generate some music based on a description (also known as a prompt).
- Lesson 2: What if we have a melody we'd like to use? Then, we can provide an audio clip with a melody and use melody-conditional generation.
- Lesson 3: How about we know how the music starts but want ideas of how to continue? That's when audio-continuation would be helpful to take an existing clip and fill in what comes afterward.
- Bonus lesson: Tying it together, take some whistling and a random prompt, and leverage melody-conditional generation and audio-continuation to make a song.
Learning Objectives
- How to create audio generation systems with AI models (and for music generation in particular), and the theory and use cases behind them
- How to use the Hugging Face Hub and PyTorch checkpoints to download and load a pre-trained model.
- How to leverage AudioCraft open-source models for music generation tasks.
Adventures in Puppy Training with AI and a Raspberry Pi
If you've had a puppy, you know that potty-training them can be a challenging, very hands-on endeavor! My puppy learned very quickly, but she would still often go slightly outside the indoor potty pad.
The solution I devised for this problem involved a Raspberry Pi, a camera, a machine learning model, and a speaker. But first I had to train the machine learning model, so I pointed a camera at the pad, labeled hundreds of videos recorded with it, and used them to train a gesture classification model to detect when she was about to go.
I then installed the model on the Raspberry Pi to detect the puppy's gestures and coordinates. Based on her position, a Bluetooth speaker would play cheering or scolding sounds, and a computer vision method would assess her accuracy. Over time, this sound-based feedback improved her accuracy, effectively training the puppy.
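In spirit, the on-device feedback loop looked roughly like the sketch below; the pad coordinates, sound files, and the classify_gesture stub are hypothetical stand-ins, not the actual project code.

```python
# Hypothetical sketch of the Raspberry Pi feedback loop (stand-in names, not the real project code).
import cv2
from playsound import playsound

PAD_REGION = (100, 50, 400, 350)  # assumed x1, y1, x2, y2 of the potty pad in the camera frame

def classify_gesture(frame):
    """Stand-in for the trained gesture-classification model.
    The real model returns a gesture label plus the puppy's (x, y) position."""
    raise NotImplementedError("plug in the trained gesture model here")

def on_pad(x, y):
    """Return True if the detected position falls inside the pad region."""
    x1, y1, x2, y2 = PAD_REGION
    return x1 <= x <= x2 and y1 <= y <= y2

camera = cv2.VideoCapture(0)
while True:
    ok, frame = camera.read()
    if not ok:
        break
    gesture, puppy_x, puppy_y = classify_gesture(frame)
    if gesture == "about_to_go":
        # Cheer when she is on the pad, gently scold when she is off it.
        playsound("cheer.wav" if on_pad(puppy_x, puppy_y) else "scold.wav")
```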
In this session, I will share the techniques used, the lessons learned, and many cute puppy videos!
Øredev 2024
Nebraska.Code() 2024
ML Conference Munich
"Interpreting NLP Transformers" (talk)
"Introduction to Explainable AI" (workshop)
Infoshare Conference
"DIY AI: Facial Recognition from scratch" (talk)
"QA for AI systems" (talk)
Data Innovation Summit
"QA for AI systems" (talk)
DeveloperWeek Europe 2023
Build Stuff 2022 Lithuania
Code PaLOUsa 2022
WeAreDevelopers World Congress 2022
DeveloperWeek 2022
ODSC West 2021
"What do Planes and Machine Learning have in common? How Interpretable ML can improve decision-making" (talk)
Great North DevFest
Ai4 2021 Enterprise
"Interpretable Machine Learning for Model Tuning" (lightning talk)
Machine Learning Prague 2021
"Ensuring Machine Learning Fairness with Monotonic Constraints" (workshop)
Strangeloop 2019
"Assistive Augmentation: Lip Reading with AI" (talk)