Artificial intelligence researcher and proud dad
Winner of three AI awards, I have been working in AI and machine learning for 25 years, designing and developing AI and computer graphics algorithms.
I’m very passionate about AI, focusing on Audio, Image and Natural Language Processing, as well as predictive analysis.
I have received several national and international awards recognizing my work and contributions in these areas.
As a Microsoft MVP in the Artificial Intelligence category, I have the pleasure of being a guest speaker at national and international events.
Area of Expertise
GPT models, the state of the art for NLP and NLG. People go crazy over their performance; no doubt they impress with their (well-controlled) game.
Is all that glitters really gold?
An objective analysis of GPT algorithms to discover together how they work under the hood, their strengths, weaknesses, and costs in real life.
Training a GPT-3 is estimated to cost more than 10 million dollars, their API is not exactly cheap, and they still struggle with bias.
Is it possible to build an architecture able to perform at the same level, on a "budget"?
Are tools like Microsoft LUIS dead? (Quick answer: no!)
How can a good dataset be worth 100 times more than an array of GPUs crunching massive amounts of data?
Starting from resources like Hugging Face, we will access pre-trained models and libraries that will speed up our job.
Finally, we will talk about the final architecture, which merges GPT-x with (several) Microsoft LUIS models (basically a QnA Maker) in production, able to perform as well as very expensive solutions (and sometimes far better).
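The hybrid idea can be sketched in a few lines: a cheap, LUIS-style intent classifier handles well-known requests, and only low-confidence utterances fall back to the expensive generative model. Everything below (intent names, keywords, threshold) is illustrative, not a real LUIS or GPT call:

```python
# Sketch of a hybrid NLU architecture: a cheap intent classifier answers
# well-known requests; low-confidence queries fall back to a generative model.
from dataclasses import dataclass

@dataclass
class NLUResult:
    intent: str
    confidence: float
    source: str  # "classifier" or "gpt-fallback"

# Toy stand-in for a trained intent classifier (e.g. a LUIS-style model):
# it scores intents by keyword overlap. A real system would call a trained model.
INTENT_KEYWORDS = {
    "order_status": {"order", "tracking", "shipped"},
    "refund": {"refund", "return", "money"},
}

def classify(utterance: str) -> tuple[str, float]:
    tokens = set(utterance.lower().split())
    best_intent, best_score = "none", 0.0
    for intent, keywords in INTENT_KEYWORDS.items():
        overlap = len(tokens & keywords) / len(keywords)
        if overlap > best_score:
            best_intent, best_score = intent, overlap
    return best_intent, best_score

def route(utterance: str, threshold: float = 0.3) -> NLUResult:
    intent, conf = classify(utterance)
    if conf >= threshold:
        return NLUResult(intent, conf, "classifier")
    # Below threshold: fall back to the (expensive) generative model.
    return NLUResult("open_question", conf, "gpt-fallback")

print(route("where is my order tracking number"))
print(route("tell me a joke"))
```

The cost saving comes from the routing itself: the generative model is only invoked for the long tail of open questions, while the bulk of predictable traffic never leaves the cheap classifier.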
Starting from a set of sentences describing your personality, thoughts, hobbies, and personal info, we will create a digital copy of you to interact with.
The combined architecture of a vanilla version of the GPT models and specialized DL models will let you perform a variety of tasks in a cost-effective way.
Moreover, we will create a DL model capable of profiling users in real time, based on the ongoing conversation with our Digital Twins. The Digital Twins are responsible for acquiring key information, extracted by steering the flow of the conversation, that allows you to make important marketing decisions.
There will be a quick demonstration showing how to use this solution to profile live conversations between "humans", or even a YouTube video.
Microsoft Azure, as well as Hugging Face, will speed up the whole process.
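As a rough illustration of the profiling idea (not the actual DL model of the session), here is a keyword-based sketch that scans an ongoing conversation for marketing signals; the interest categories and patterns are invented for the example:

```python
# Minimal sketch of conversation profiling: scan conversation turns for
# interest signals. A real system would use a trained DL model; the keyword
# rules here are purely illustrative.
import re
from collections import Counter

INTEREST_PATTERNS = {
    "sports": r"\b(football|gym|running|tennis)\b",
    "travel": r"\b(trip|flight|hotel|vacation)\b",
    "tech":   r"\b(laptop|phone|gadget|software)\b",
}

def profile(conversation: list[str]) -> dict:
    counts = Counter()
    for turn in conversation:
        for interest, pattern in INTEREST_PATTERNS.items():
            counts[interest] += len(re.findall(pattern, turn.lower()))
    top = counts.most_common(1)
    return {"top_interest": top[0][0] if top and top[0][1] > 0 else None,
            "signals": dict(counts)}

chat = ["I just booked a flight for my vacation",
        "looking for a hotel near the beach",
        "also need a new phone before the trip"]
print(profile(chat))
```

A DL profiler replaces the regex table with learned representations, but the output contract (a profile of aggregated signals per user) stays the same.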
In this session we will discover the potential of Stable Diffusion and the available tools that allow us to create stunning projects that will leave you speechless.
Custom training with our own images is the main focus.
- Full training
- Face swapping
Ethical issues are the final and perhaps the most important topic of the session.
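To make the training discussion concrete, the forward (noising) process that diffusion models learn to invert can be sketched in a few lines of NumPy; the schedule values are illustrative, not those of any production Stable Diffusion checkpoint:

```python
# Sketch of the forward diffusion process that Stable Diffusion-style models
# are trained to invert: x_t = sqrt(a_bar_t)*x_0 + sqrt(1 - a_bar_t)*eps.
import numpy as np

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)      # linear noise schedule (illustrative)
alphas_bar = np.cumprod(1.0 - betas)    # cumulative signal retention

def q_sample(x0: np.ndarray, t: int) -> np.ndarray:
    """Sample the noised image x_t given x_0 in closed form."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

x0 = rng.standard_normal((8, 8))        # a toy "image"
early, late = q_sample(x0, 10), q_sample(x0, 999)
# Early steps keep most of the signal; by the last step it is nearly pure noise.
print(alphas_bar[10], alphas_bar[999])
```

Custom training (full training or face-oriented fine-tuning) always optimizes the same objective: predicting the noise `eps` added by this process, just on your own images.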
From a "simple" fine-tuning to your own Mixture of Experts model built from open-source models.
Nowadays, training an LLM from scratch is a huge effort even for very big companies. Starting from pre-trained models to build your own is no longer just a fallback for companies with limited resources, but often a must-have starting point.
- Quantization and QLoRA
- Injecting embedding models into LoRA to manage multiple LoRA adapters
- Mixing models
- Creating your own MoE (Mixture of Experts) model from several models you fine-tuned yourself
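The building blocks above can be sketched with NumPy: a frozen base weight plus swappable low-rank LoRA adapters, and a softmax gate mixing several fine-tuned "experts". All names, shapes, and values are illustrative:

```python
# NumPy sketch of LoRA adapters and MoE gating. A LoRA adapter adds a
# low-rank update B @ A to a frozen base weight W; a gate mixes experts.
import numpy as np

rng = np.random.default_rng(42)
d, r = 16, 2                                 # hidden size, LoRA rank
W = rng.standard_normal((d, d))              # frozen base weights

def make_lora(rank: int = r) -> tuple[np.ndarray, np.ndarray]:
    # B starts at zero so an untrained adapter is a no-op, as in real LoRA.
    return np.zeros((d, rank)), rng.standard_normal((rank, d))

adapters = {"legal": make_lora(), "medical": make_lora()}

def forward(x: np.ndarray, adapter: str) -> np.ndarray:
    B, A = adapters[adapter]
    return x @ (W + B @ A)                   # base + low-rank update

def moe_forward(x: np.ndarray, gate_logits: np.ndarray) -> np.ndarray:
    # Softmax gate over the experts; a trained router would produce the logits.
    g = np.exp(gate_logits - gate_logits.max())
    g /= g.sum()
    outs = [forward(x, name) for name in adapters]
    return sum(w * o for w, o in zip(g, outs))

x = rng.standard_normal(d)
print(np.allclose(forward(x, "legal"), x @ W))   # untrained adapter is a no-op
```

The point of the low-rank trick is the parameter count: each adapter stores only `2*d*r` values instead of `d*d`, which is why many domain experts can share one frozen base model.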
Artificial Intelligence and Machine Learning are a must nowadays.
For projects tackling a simple or well-known problem we can find plenty of ready-made solutions, but the game changes when we face specific, custom problems.
The first part of this session is a deep dive into techniques, approaches, and best practices for configuring ML algorithms; but above all, do we really always need them?
The second part will cover the vulnerabilities of ML, discovering how easy it can be to fool and hack a neural network with certain techniques (like the pixel attack) and what that implies for (our) security.
A demonstration will focus on an e-commerce case using cloud ML (Cognitive) services, breaking them, and then presenting possible solutions and workarounds.
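As a toy illustration of how fragile a model can be, the sketch below brute-forces a single-pixel change that flips a tiny linear classifier. Real pixel attacks use differential evolution against deep networks, so this only shows the core idea; weights and the input are hand-picked for the example:

```python
# Toy single-pixel attack: find one pixel whose change flips the prediction
# of a tiny linear classifier on a flattened 3x3 "image".
import numpy as np

# Hand-picked toy model weights and input (illustrative, not a trained model).
w = np.array([2.0, -1.0, 0.5, 0.3, -0.2, 0.1, 0.4, -0.6, 0.2])
x = np.ones(9) * 0.1

def predict(v: np.ndarray) -> int:
    return int(v @ w > 0)

original = predict(x)

def pixel_attack(img: np.ndarray, budget: float = 5.0):
    """Try every pixel, pushed to +/-budget, until the label flips."""
    for i in range(len(img)):
        for value in (budget, -budget):
            adv = img.copy()
            adv[i] = value
            if predict(adv) != original:
                return adv, i
    return None, None

adv, flipped_pixel = pixel_attack(x)
print(original, predict(adv), flipped_pixel)  # → 1 0 0
```

Even this toy shows the asymmetry the session explores: the defender must be robust everywhere, while the attacker only needs one coordinate with enough leverage on the decision boundary.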
Creating a photorealistic avatar that speaks any sentence, starting from written input text.
Focusing on autoencoders, we will take a journey from the beginning (of the speaker's experience), through the mistakes made and the tips learned along the way.
The session will showcase:
- Intro: the timeline from the beginning to today
- Why this is NOT a deepfake
- Audio processing techniques: STFT (Short-Time Fourier Transform), mel spectrograms, and custom solutions
- Deep learning models and architecture
- The technique, inspired by inpainting, used to animate the mouth
- Masks and convolution
- Landmarks extraction
- Morphing animation technique based on autoencoders features
- Microsoft Azure Speech services used to support audio and animation processing
- Putting it all together
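The STFT step listed above can be sketched directly in NumPy: slice the waveform into overlapping windows and take an FFT of each, yielding the magnitude spectrogram that mel filterbanks are built on. The parameters below are typical but illustrative:

```python
# Sketch of an STFT: overlapping windowed frames, each transformed with a
# real FFT, giving a (time, frequency) magnitude spectrogram.
import numpy as np

sr = 16000                                   # sample rate (Hz)
t = np.arange(sr) / sr                       # 1 second of audio
signal = np.sin(2 * np.pi * 440.0 * t)       # a 440 Hz tone

def stft(x: np.ndarray, n_fft: int = 512, hop: int = 128) -> np.ndarray:
    window = np.hanning(n_fft)               # taper each frame to reduce leakage
    frames = [x[i:i + n_fft] * window
              for i in range(0, len(x) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))   # magnitude spectrogram

spec = stft(signal)                          # shape: (frames, n_fft // 2 + 1)
peak_bin = spec.mean(axis=0).argmax()
peak_hz = peak_bin * sr / 512                # bin index back to frequency
print(spec.shape, peak_hz)
```

The peak lands on the bin nearest 440 Hz (the bin width here is 16000/512 = 31.25 Hz); mel spectrograms then pool these linear bins into perceptually spaced bands before they are fed to the deep learning models.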
After an introduction to Microsoft Azure Machine Learning Studio, we will move on to the theory of generative algorithms in deep learning.
Focusing on autoencoders, we will take a journey from the beginning (from the speaker's point of view), highlighting the main mistakes made and tips learned over time, and showing the various deep learning models that led to the definition of the current one.