Gabriel Agbobli
Research & Teaching Assistant, University of Ghana
Accra, Ghana
Gabriel Agbobli is a machine learning researcher with a keen interest in data science and analytics. He specializes in the application of statistical methods to machine learning and artificial intelligence. Gabriel has completed several data science projects and internships, and he has served as a Google Developer Student Club Lead, helping students get started in the tech industry. He currently works as a research and teaching assistant in the Department of Statistics and Actuarial Science at the University of Ghana. He also speaks at events and is always willing to share his experience.
Topics
Fine Tuning Gemini Using Google AI Studio
Discover how to tailor the powerful Gemini language model to your specific needs using Google AI Studio. In this hands-on session, we'll delve into the intricacies of fine-tuning, equipping you with the skills to create highly customized AI applications.
Key Topics:
Understanding Fine-Tuning: Learn the basics of fine-tuning and its benefits in enhancing Gemini's performance.
Dataset Preparation: Explore essential techniques for preparing and curating high-quality datasets tailored to your specific tasks.
Fine-Tuning with Google AI Studio: Navigate the Google AI Studio interface and its tools for efficient fine-tuning workflows.
By the end of this session, you will:
Understand the fundamental concepts of fine-tuning Gemini.
Be able to prepare and curate datasets effectively.
Master the use of Google AI Studio for fine-tuning.
Apply fine-tuning techniques to your own projects.
Implement best practices for optimizing fine-tuned models.
Join us to unlock the full potential of Gemini and create cutting-edge AI applications!
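As a small taste of the dataset-preparation step, here is a minimal Python sketch of curating a handful of input/output examples and saving them as a CSV for upload; the two-column layout, the column names, and the example prompts are illustrative assumptions rather than part of the session materials.

```python
import csv

# Illustrative input/output pairs for a hypothetical customer-support tuning task.
# The "text_input" / "output" column names are an assumption about the structured
# example format; adjust them to match what Google AI Studio expects.
examples = [
    ("How do I reset my password?", "Go to Settings > Security and choose 'Reset password'."),
    ("Can I change my billing date?", "Yes, open Billing > Preferences and pick a new date."),
    ("Where can I download my invoices?", "Invoices are available under Billing > History."),
]

with open("tuning_examples.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["text_input", "output"])  # header row
    writer.writerows(examples)

print(f"Wrote {len(examples)} training examples to tuning_examples.csv")
```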
XAI: Techniques for Interpreting and Understanding Machine Learning Models
Machine learning models are becoming increasingly complex and powerful, but they can also be difficult to understand. Explainable AI (XAI) is a field of research that seeks to develop techniques for making machine learning models more interpretable. This is important for a number of reasons, including:
Enhancing trust and transparency: By understanding how a model makes its predictions, users can be more confident in its decisions.
Debugging and improving models: XAI techniques can be used to identify and fix problems with machine learning models.
Exploring new applications: XAI can help researchers and developers to explore new applications for machine learning, such as in healthcare, finance, and security.
There are a variety of XAI techniques available, each with its own strengths and weaknesses. Some of the most common techniques include:
Local interpretability: This approach explains the prediction for a single data point by examining the model's decision-making process at that point.
Global interpretability: This approach explains the overall behavior of the model by looking at how it responds to different types of data (a short sketch of one such technique follows this list).
Counterfactual explanation: This approach explains why the model made a particular prediction by showing how the prediction would have changed if one or more of the input features had been different.
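As one concrete illustration of global interpretability, the minimal sketch below ranks features by permutation importance using scikit-learn; the dataset and model are placeholders, and this is only one of the many techniques the talk covers.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Placeholder dataset and model: any fitted estimator with a score method works.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how much the
# test-set score drops; larger drops indicate features the model relies on more.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

for name, importance in sorted(
    zip(X.columns, result.importances_mean), key=lambda p: p[1], reverse=True
)[:5]:
    print(f"{name}: {importance:.4f}")
```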
In this talk, we will discuss the different XAI techniques and how they can be used to interpret and understand machine learning models. We will also discuss the challenges of XAI and the future directions of research in this field.
This talk is intended for developers, data scientists, and researchers who are interested in learning more about XAI. No prior knowledge of machine learning is required.
Navigating Anomaly Detection using Statistical Approaches in Data Mining
Anomaly detection is the process of identifying data points that deviate from the norm. This can be useful for identifying fraud, detecting intrusions, and preventing equipment failures. There are many different statistical approaches to anomaly detection, each with its own strengths and weaknesses.
In this talk, we will discuss the following statistical approaches to anomaly detection:
Z-score: The z-score is a simple statistical technique that compares a data point to the mean and standard deviation of the dataset. A data point is considered an anomaly if the absolute value of its z-score exceeds a chosen threshold, commonly 3 (see the short sketch after this list).
Interquartile range (IQR): The IQR is a more robust technique than the z-score. Because it is based on quartiles rather than the mean and standard deviation, it is less sensitive to extreme values and works for non-normally distributed data; points falling below Q1 - 1.5*IQR or above Q3 + 1.5*IQR are flagged as anomalies.
Boxplot: A boxplot is a graphical representation of the distribution of data. It can be used to identify outliers and to visualize the spread of data.
Histogram: A histogram is a graphical representation of the frequency of data points in a dataset. It can be used to identify outliers and to visualize the distribution of data.
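To make the first two approaches concrete before the discussion that follows, here is a minimal sketch of z-score and IQR anomaly detection on synthetic data; the threshold of 3 and the 1.5*IQR fences are conventional defaults, not fixed rules.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic data: 200 "normal" readings plus a few injected anomalies.
data = np.concatenate([rng.normal(loc=10.0, scale=0.5, size=200),
                       [25.0, -4.0, 18.5]])

# Z-score approach: flag points more than 3 standard deviations from the mean.
z_scores = (data - data.mean()) / data.std()
z_anomalies = data[np.abs(z_scores) > 3]

# IQR approach: flag points outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR].
q1, q3 = np.percentile(data, [25, 75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
iqr_anomalies = data[(data < lower) | (data > upper)]

print("Z-score anomalies:", np.sort(z_anomalies))
print("IQR anomalies:", np.sort(iqr_anomalies))
```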
We will discuss the advantages and disadvantages of each statistical approach, and we will provide some guidelines for choosing the right approach for a particular task.
We will also discuss the challenges of anomaly detection, such as the problem of false positives and the problem of detecting anomalies in streaming data.
This talk will be of interest to anyone who is involved in data mining or machine learning. It will provide you with the knowledge you need to choose the right statistical approach to anomaly detection for your data mining projects.
Foundational Integration: Employing Statistical Concepts in Data Mining Frameworks
Data mining is the process of extracting knowledge from large datasets. It is a powerful tool that can be used to solve a wide variety of problems, from fraud detection to customer segmentation. However, data mining is not a magic bullet. It requires a solid understanding of statistics in order to be effective.
This talk will explore the foundational integration of statistical concepts in data mining frameworks. We will discuss the importance of statistics in data mining, and how statistical concepts can be used to improve the performance of data mining algorithms. We will also cover some of the challenges of integrating statistics with data mining frameworks, and how these challenges can be overcome.
This talk is intended for developers, data scientists, and anyone else interested in learning more about the intersection of statistics and data mining. No prior knowledge of statistics is required.
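As one small, hedged example of the kind of integration the talk describes, the sketch below uses a classical statistical tool (the chi-square test of association) as a feature-selection step inside a scikit-learn pipeline; the dataset and parameter choices are purely illustrative.

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Illustrative only: a chi-square test ranks features by their association with
# the target and keeps the strongest ones before the model is trained.
X, y = load_iris(return_X_y=True)

pipeline = make_pipeline(
    SelectKBest(score_func=chi2, k=2),   # keep the 2 features most associated with y
    LogisticRegression(max_iter=1000),
)

scores = cross_val_score(pipeline, X, y, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.3f}")
```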
Feature Engineering in Machine Learning: From Raw Data to Powerful Predictors
The first part of the workshop will cover the basics of feature engineering, including the main types of features, the importance of feature selection, and common techniques for feature transformation.
The second part of the workshop will focus on the application of feature engineering to real-world data sets. Attendees will learn how to identify the key challenges in feature engineering and how to select the right feature engineering techniques for their data set.
The final part of the workshop will be a hands-on exercise where attendees will apply feature engineering to a real-world data set. This exercise will give attendees the opportunity to practice the skills they have learned throughout the workshop.
This is an intermediate-level session, ideal for data scientists, machine learning engineers, and anyone interested in learning more about feature engineering.
After the session, attendees will be able to identify the key challenges in feature engineering, select the right feature engineering techniques for their data set, and implement feature engineering in their machine learning projects.
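To give a flavour of what such feature engineering can look like in practice, here is a minimal pandas sketch; the column names and derived features are illustrative assumptions, not the workshop's dataset.

```python
import numpy as np
import pandas as pd

# Toy raw data standing in for a real customer table.
raw = pd.DataFrame({
    "signup_date": pd.to_datetime(["2023-01-05", "2023-03-17", "2023-07-02"]),
    "total_spend": [120.0, 0.0, 560.5],
    "num_orders": [4, 0, 11],
    "country": ["GH", "NG", "KE"],
})

features = pd.DataFrame(index=raw.index)

# Temporal features extracted from a raw timestamp.
features["signup_month"] = raw["signup_date"].dt.month
features["signup_dayofweek"] = raw["signup_date"].dt.dayofweek

# Ratio feature with a guard against division by zero.
avg = raw["total_spend"] / raw["num_orders"]
features["avg_order_value"] = avg.replace([np.inf, -np.inf], np.nan).fillna(0.0)

# One-hot encoding for a categorical feature.
features = features.join(pd.get_dummies(raw["country"], prefix="country"))

print(features)
```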
Evaluation Metrics and Model Selection Strategies for Effective Data Mining
Data mining is a powerful tool that can be used to extract insights from large datasets. However, it is important to evaluate the performance of data mining models before deploying them in production. This talk will discuss the importance of evaluation metrics and model selection strategies for effective data mining.
We will start by discussing the different types of evaluation metrics that can be used to measure the performance of data mining models. We will then discuss the different model selection strategies that can be used to choose the best model for a particular dataset. Finally, we will discuss some of the challenges of evaluating data mining models and how to overcome them.
This talk will be of interest to anyone who is involved in data mining or machine learning. It will provide you with the knowledge you need to choose the right evaluation metrics and model selection strategies for your data mining projects.
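As a brief illustration of why metric choice matters, the sketch below cross-validates two candidate models against several metrics on a synthetic, imbalanced dataset; the models, metrics, and data are placeholders for whatever a real project would use.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate

# Synthetic, slightly imbalanced dataset; purely illustrative.
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.8, 0.2], random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

# Evaluate each candidate with several metrics, not accuracy alone, because
# accuracy can be misleading on imbalanced data.
for name, model in candidates.items():
    cv = cross_validate(model, X, y, cv=5, scoring=["accuracy", "f1", "roc_auc"])
    print(
        f"{name}: "
        f"accuracy={cv['test_accuracy'].mean():.3f}, "
        f"f1={cv['test_f1'].mean():.3f}, "
        f"roc_auc={cv['test_roc_auc'].mean():.3f}"
    )
```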
DevFest Cocody 2023 Sessionize Event
DevFest Dar Es Salaam 2023 Sessionize Event
DevFest Kaduna 2023 Sessionize Event
DevFest Egbe 2023 Sessionize Event
Kigali Devfest 2023 Sessionize Event