A hitchhiker's guide to embeddings
Embeddings are vectors of floating-point numbers that represent, in a high-dimensional latent space, the information contained in a text, an image, or an audio clip.
Because these vectors carry so much information about the original data, they serve as crucial tools for machine learning models to make predictions, find similar objects, and perform semantic search.
Join me in this talk where we will discover how embeddings can be generated using open-source models and what we can do with them.
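As a small taste of the hands-on part, here is a minimal sketch of generating text embeddings and running a tiny semantic search. The library (sentence-transformers) and model name (all-MiniLM-L6-v2) are illustrative choices, not ones named in the abstract.

```python
# Minimal sketch: text embeddings with an open-source model, plus a tiny semantic search.
# sentence-transformers and "all-MiniLM-L6-v2" are illustrative choices, not from the abstract.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small open-source embedding model

corpus = [
    "The cat sits on the mat.",
    "Quarterly revenue grew by 10%.",
    "A kitten is sleeping on the rug.",
]
query = "Where is the cat?"

# Encode texts into dense float vectors: these are the embeddings.
corpus_embeddings = model.encode(corpus)
query_embedding = model.encode(query)

# Semantic search: rank corpus sentences by cosine similarity to the query.
scores = util.cos_sim(query_embedding, corpus_embeddings)[0]
best = scores.argmax().item()
print(f"Most similar sentence: {corpus[best]!r} (score={scores[best].item():.3f})")
```

Cosine similarity over these vectors is what powers the "find similar objects" and semantic search use cases mentioned above.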
Is your model private?
As the popularity of Machine Learning models continues to soar, concerns about the risks associated with black-box models have become more prominent. While much attention has been given to unfair models that may discriminate against certain minorities, another concern is often overlooked: the privacy risks posed by ML models.
Research has shown that ML models are susceptible to various attacks; a notable example is the Membership Inference attack, which lets an adversary predict whether a specific sample was present in the training data.
Join me in this talk, where I will explain the privacy risks inherent in Machine Learning models. Beyond exploring potential attacks, I will show how techniques such as Differential Privacy and tools like Opacus (https://github.com/pytorch/opacus) can play a crucial role in training more robust and secure models.
Preferred session duration: 30-45 minutes
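To give a flavour of the Opacus part, here is a hedged sketch of differentially private training (DP-SGD) with Opacus' PrivacyEngine. The toy model, data, and hyperparameters are placeholders rather than material from the talk, and exact signatures may differ across Opacus versions.

```python
# Hedged sketch of DP-SGD training with Opacus; the model, data, and hyperparameters
# are placeholders, and the API shown is the Opacus 1.x PrivacyEngine workflow.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Toy dataset and model, purely illustrative.
X, y = torch.randn(512, 20), torch.randint(0, 2, (512,))
train_loader = DataLoader(TensorDataset(X, y), batch_size=64)
model = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

# Wrap model, optimizer, and data loader so gradients are clipped per sample and noised.
privacy_engine = PrivacyEngine()
model, optimizer, train_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=train_loader,
    noise_multiplier=1.0,  # amount of Gaussian noise added to the gradients
    max_grad_norm=1.0,     # per-sample gradient clipping threshold
)

for epoch in range(3):
    for batch_x, batch_y in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(batch_x), batch_y)
        loss.backward()
        optimizer.step()

# Privacy accounting: how much privacy budget (epsilon) has been spent at a given delta.
print("epsilon:", privacy_engine.get_epsilon(delta=1e-5))
```

The per-sample clipping and added noise are what make attacks such as Membership Inference much harder to carry out.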
Please, stop logging your experiments in a .txt file
Effectively debugging Machine Learning models and comparing results demands a robust logging system. In this talk, we will explore the capabilities of Weights & Biases, a platform for tracking experiments, metrics, and results. Learn how this tool can enhance your productivity during ML model development.
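For illustration, here is a minimal sketch of what experiment tracking with Weights & Biases looks like; the project name, config values, and metrics are placeholders, not details from the talk.

```python
# Minimal sketch of experiment tracking with Weights & Biases (wandb);
# the project name, config, and metrics are placeholders.
import random
import wandb

# Start a run and record the hyperparameters you would otherwise dump into a .txt file.
run = wandb.init(project="my-ml-experiments", config={"lr": 1e-3, "epochs": 5})

for epoch in range(run.config["epochs"]):
    train_loss = 1.0 / (epoch + 1) + random.random() * 0.05  # stand-in for a real training loop
    val_accuracy = 0.70 + 0.05 * epoch

    # Metrics logged here appear as live, comparable charts in the W&B dashboard.
    wandb.log({"epoch": epoch, "train_loss": train_loss, "val_accuracy": val_accuracy})

run.finish()
```

Every run keeps its own config and metric history, so comparing two experiments becomes a dashboard view instead of a diff between text files.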
AI’s Hidden Cost: The Environmental Impact of Machine Learning
The rise of Generative AI has led us to use tools that are slowly changing how we work.
Training these AI models demands significant resources, including vast amounts of water and energy. In this talk, we'll uncover the hidden environmental costs of AI and discuss solutions that have emerged in recent years.
Join me in exploring how to make AI more sustainable and discovering how we can reduce its environmental impact while continuing to advance the field.
Session duration: 20 minutes
Learning Together, Distributed: An Introduction to Federated Learning
Federated Learning (FL) is a machine learning approach that lets users collaborate to train models without sharing their data. By keeping data on individual devices, FL protects privacy while still enabling the benefits of collective training.
Over the past decade, FL has become a key part of many real-world systems, quietly running behind the scenes on millions of devices. Big companies like Google and Apple have used this technology to deliver smarter and more personalized experiences. Open-source tools like Flower (https://flower.ai) have also made it easier to experiment with FL.
Join me in this talk to discover the basics of Federated Learning, explore its real-world applications, and learn how to create a simple simulation of decentralized training using Flower.
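As a preview of the simulation part, here is a hedged sketch built on Flower's NumPyClient and start_simulation APIs (Flower 1.x, with the flwr[simulation] extra installed). The toy "model" (a single vector averaged across clients) is a placeholder, not the example used in the talk, and newer Flower releases also offer a "flwr run" workflow.

```python
# Hedged sketch of a Federated Learning simulation with Flower (https://flower.ai).
# Uses the Flower 1.x NumPyClient / start_simulation API; the toy "model" is a placeholder.
import flwr as fl
import numpy as np


class MeanClient(fl.client.NumPyClient):
    """Each client holds private data and only ever shares model parameters."""

    def __init__(self, local_data: np.ndarray):
        self.local_data = local_data

    def get_parameters(self, config):
        return [np.zeros(3, dtype=np.float32)]

    def fit(self, parameters, config):
        # "Local training": move the global parameters toward the local data mean.
        updated = parameters[0] + 0.5 * (self.local_data.mean(axis=0) - parameters[0])
        return [updated.astype(np.float32)], len(self.local_data), {}

    def evaluate(self, parameters, config):
        loss = float(np.linalg.norm(parameters[0] - self.local_data.mean(axis=0)))
        return loss, len(self.local_data), {}


def client_fn(cid: str):
    # Each simulated client gets its own private dataset; the raw data never leaves it.
    rng = np.random.default_rng(int(cid))
    local_data = rng.normal(loc=float(cid), scale=1.0, size=(100, 3)).astype(np.float32)
    return MeanClient(local_data).to_client()


if __name__ == "__main__":
    # Small simulation: 5 clients, 3 rounds of federated averaging (FedAvg is the default strategy).
    fl.simulation.start_simulation(
        client_fn=client_fn,
        num_clients=5,
        config=fl.server.ServerConfig(num_rounds=3),
    )
```

Only the parameter lists travel between clients and server; each client's data stays on its own (simulated) device.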