Wafae Hamdaoui
PhD student
I am a full-stack software developer, currently pursuing a PhD at Euromed University in collaboration with the University of Valencia. My research focuses on applying AI and deep learning to real-world challenges in precision agriculture, particularly the early detection and classification of citrus plant diseases. My work involves developing CNN models, constructing datasets, and deploying solutions on mobile and edge platforms for real-time agricultural applications. In addition to my research, I have strong full-stack development skills and enjoy creating efficient, user-friendly web and mobile applications.
Explainable AI
Explainable AI (XAI) is a branch of AI that seeks to make Machine Learning and Deep Learning models understandable to humans. When developing data-driven models, a trade-off often arises between accuracy and interpretability: improving a model's predictive accuracy typically comes at the cost of its interpretability, and vice versa. This phenomenon is known as the "Interpretability-Accuracy Trade-Off". Simple models, such as linear regression, provide transparent decision-making but may struggle with complex problems, whereas non-linear, highly parameterized models like Artificial Neural Networks generally achieve superior predictive performance but are often too intricate for humans to understand. The complexity of such systems renders them opaque, even to AI specialists, earning them the label of "black boxes". Yet these methods are applied in critical fields such as safety, healthcare, and food production, where understanding how decisions are made inside these black boxes is essential. This is exactly where XAI comes into play, making models more understandable and interpretable.
Understanding how models work is important not only for AI developers but also for end users, who seek clear explanations for the decisions made. Thus, the understandability and interpretability of models can be considered an essential aspect of user experience and a key factor in encouraging the adoption of AI-based systems in real-world applications.
To sum up, in this session I will present a range of explainable AI techniques used to analyze and interpret model decisions, making them more understandable.
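As a minimal illustrative sketch (my own example, not the session material itself), one widely used model-agnostic technique is permutation importance: each feature is shuffled in turn, and the resulting drop in the model's score indicates how much the model relies on that feature. The dataset and model below are placeholders chosen only so the example runs end to end.

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Placeholder data and model; in practice this would be the black-box model being explained.
X, y = load_iris(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure how much the held-out accuracy degrades;
# larger drops mean the model depends more heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, mean, std in zip(X.columns, result.importances_mean, result.importances_std):
    print(f"{name}: {mean:.3f} +/- {std:.3f}")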
Reducing the Cost of AI Inference: From Hardware to Software
AI models operate through two primary stages. The first is the training stage, during which the model learns underlying patterns and relationships from the training data by optimizing its parameters. The second is the inference stage, in which the trained model applies this learned knowledge to make predictions on new, unseen data.
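A minimal sketch of these two stages, assuming a PyTorch setup with a toy classifier and random placeholder data (all names and shapes below are illustrative only):

import torch
import torch.nn as nn

# Toy classifier standing in for a real model.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Training stage: adjust the parameters to fit the training data.
x_train, y_train = torch.randn(64, 4), torch.randint(0, 3, (64,))
model.train()
for _ in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(x_train), y_train)
    loss.backward()
    optimizer.step()

# Inference stage: apply the trained model to new, unseen inputs.
model.eval()
with torch.no_grad():
    x_new = torch.randn(1, 4)
    predicted_class = model(x_new).argmax(dim=1)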
Inference is often associated with high computational cost: modern AI models, particularly large language models (LLMs), can cost millions of dollars to deploy and operate at scale. This cost arises from several key factors, including the scale of operations, latency and throughput requirements, and model complexity.
In this session, I aim to explain how the cost of AI inference can be reduced at multiple levels of the system stack, namely the hardware, software, and middleware levels.
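As one illustrative software-level example (a sketch of mine, not the full session content), post-training dynamic quantization in PyTorch stores the weights of Linear layers in int8, reducing model size and often lowering CPU inference cost. The model below is a placeholder standing in for a larger trained network.

import torch
import torch.nn as nn

# Placeholder float32 model standing in for a larger trained network.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
model.eval()

# Software-level optimization: dynamic quantization converts Linear weights
# to int8, shrinking the memory footprint and typically speeding up CPU inference.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

with torch.no_grad():
    output = quantized(torch.randn(1, 512))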