Speaker

V N G Suman Kanukollu

F5, Distributed Cloud - Automation Engineer

Hyderābād, India

Continuous Learner | AI & ML Enthusiast | Talks about Deep Learning | AWS | Cloud Automation | Docker | Python | REST API | Blogger | Community Speaker

A software professional and community speaker with 12+ years of experience in automation and tool development for software applications using Python, Django & Flask. Currently working as a Sr. Software Engineer at F5 in the cyber security domain, on the F5 Distributed Cloud Bot Defense solution.

An AI/ML enthusiast with an interest in Deep Learning and Computer Vision, and a promoter of the DL framework PyTorch.

Lots of desire to learn new exciting things!

Area of Expertise

  • Information & Communications Technology

Topics

  • Neural Networks
  • Deep Neural Networks
  • Deep Learning
  • Machine Learning and Artificial Intelligence
  • PyTorch

How Deep Learning Can Be Used for Malware Detection

Malware is a threat to businesses everywhere. Short for “malicious software”, malware is any intrusive program that exploits system vulnerabilities to wreak havoc on a computing system. We need robust malware detection tools to prevent this from happening to our businesses.

Even the best malware detection methods can fall short, however, because malware is constantly evolving. New malware variants are developed all the time, and cyber criminals frequently change their tactics and techniques.

This talk covers:
* The threat malware poses to businesses and the need for effective malware detection tools
* Why it is critical to detect and mitigate malware
* How you can stay ahead of these threats using deep learning
* How deep learning and artificial intelligence can help you uncover malware threats and thwart them before they can damage your organisation
* The benefits of deep-learning-based malware detection (a minimal illustrative sketch follows this list)
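
To make this concrete, here is a minimal, hedged sketch in PyTorch (an illustration, not the talk's actual code): a tiny classifier that labels files as benign or malicious from their byte-value histograms. The features and labels below are random stand-ins; a real pipeline would extract them from binaries.

```python
# Illustrative sketch: classify files from 256-bin byte histograms with PyTorch.
import torch
import torch.nn as nn

class MalwareClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(256, 128),  # 256-bin byte histogram as input features
            nn.ReLU(),
            nn.Linear(128, 2),    # two classes: benign vs. malicious
        )

    def forward(self, x):
        return self.net(x)

model = MalwareClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in batch: 32 "files" with random histogram features and random labels.
features = torch.rand(32, 256)
labels = torch.randint(0, 2, (32,))

optimizer.zero_grad()
logits = model(features)
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.4f}")
```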

Understanding Convolutional Neural Networks (CNNs)

Convolutional neural networks (CNNs) enable very powerful deep-learning-based techniques for processing, generating, and making sense of visual information. These are revolutionary techniques in computer vision that impact technologies ranging from e-commerce to self-driving cars.

This session offers an in-depth examination of CNNs, their fundamental processes, their applications, and their role in visualisation and image enhancement.

This session covers concepts, processes, and technologies such as CNN layers and architectures (LeNet, AlexNet, ZF-Net, VGG, GoogLeNet & ResNet). I will also explain CNN image classification and segmentation, deep dream and style transfer, super-resolution, and generative adversarial networks (GANs); a small illustrative sketch follows below.
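
As a taste of what a CNN looks like in code, here is a minimal, illustrative PyTorch model (an assumption for this abstract, not one of the named architectures): two convolutional layers followed by a linear classifier. The named architectures differ mainly in how many such blocks they stack and how they connect them.

```python
# Illustrative sketch: a tiny CNN for 32x32 RGB images.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # 3-channel image -> 16 feature maps
            nn.ReLU(),
            nn.MaxPool2d(2),                              # halve spatial resolution: 32 -> 16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 16 -> 8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)  # assumes 32x32 inputs

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyCNN()
dummy_batch = torch.randn(4, 3, 32, 32)   # 4 random 32x32 RGB images
print(model(dummy_batch).shape)           # -> torch.Size([4, 10])
```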

Learners who come to this session with a basic knowledge of deep learning principles, some computer vision experience, and exposure to engineering math should gain the ability to implement CNNs and use them to create their own visualisations.

Session summary:
* Discover the connections between CNNs and the biological principles of vision
* Understand the advantages and trade-offs of various CNN architectures
* Survey the history and ongoing evolution of CNNs
* Learn to apply the latest GAN, style transfer, and semantic segmentation techniques
* Explore CNN applications, visualisation, and image enhancement

A visual representation of how Deep Neural Networks Convolve

With the resurgence of Neural Networks in the 2010s, Deep Learning has become essential for Machine Learning practitioners and even many software engineers.

Here I provide a comprehensive introduction to the underlying mathematics for data scientists and software engineers with machine learning experience.

I will start with Deep Learning basics, explaining the concepts visually through Python code, and then move quickly to the details of important advanced architectures, showing along the way how these basics are used to implement Deep Neural Networks from scratch.

I will demonstrate how Neural Networks work in a principled way.

This will cover:
1. Neural Networks basics and Forward propagation
2. The importance of Back propagation
3. Optimisation Algorithms
4. Perform train/test splits and examine production case studies
5. Deep Neural Network explanation: conceptually + visually + mathematically, with the help of Python code (see the sketch after this list)
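
As a hedged, from-scratch sketch of items 1–3 (not the talk's actual code), here is a one-hidden-layer network with forward propagation, backpropagation, and a plain gradient-descent update in NumPy.

```python
# From-scratch sketch: forward pass, backprop, and gradient descent in NumPy.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))            # 8 samples, 3 input features
y = rng.normal(size=(8, 1))            # regression targets

W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)   # input -> hidden (4 units)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output
lr = 0.01

for step in range(200):
    # Forward propagation
    h = np.maximum(0, X @ W1 + b1)     # ReLU hidden layer
    y_hat = h @ W2 + b2
    loss = np.mean((y_hat - y) ** 2)   # mean squared error

    # Backpropagation (chain rule, layer by layer)
    d_out = 2 * (y_hat - y) / len(X)
    dW2, db2 = h.T @ d_out, d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * (h > 0)     # gradient through the ReLU
    dW1, db1 = X.T @ d_h, d_h.sum(axis=0)

    # Gradient-descent update (the simplest optimisation algorithm)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final loss: {loss:.4f}")
```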

Deep Learning for Beginners (From Basics to Neural Networks)

With the resurgence of Neural Networks in the 2010s, Deep Learning has become essential for Machine Learning practitioners and even many software engineers. In this talk, I will start with Deep Learning basics and move quickly to the details of important advanced architectures, as well as how these basics are used in implementing Deep Neural Networks from scratch along the way. I will demonstrate how Neural Networks work in a principled way.

This talk will cover:
1. Neural Networks basics and Forward propagation
2. The importance of Back propagation
3. Optimisation Algorithms
4. Perform train/test splits and examine production case studies (a minimal split example follows this list)
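
For item 4, here is a minimal, hedged sketch of a train/test split using scikit-learn as an assumed tool; the talk itself may use a different library or do the split by hand.

```python
# Illustrative sketch: hold out a test set before training a model.
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(100).reshape(50, 2)      # 50 toy samples with 2 features each
y = np.arange(50) % 2                  # alternating binary labels

# Hold out 20% of the data for evaluation; fix the seed for reproducibility.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)
print(X_train.shape, X_test.shape)     # (40, 2) (10, 2)
```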

A recording of this seminar is available on my YouTube channel: https://www.youtube.com/watch?v=IuNEfmuoEuM&t=369s&ab_channel=Codementor

Neural networks and Deep Learning

Neural networks are at the very core of deep learning. They are versatile, powerful, and scalable, making them ideal to tackle large and highly complex Machine Learning tasks, such as classifying billions of images (e.g., Google Images), powering speech recognition services (e.g., Apple’s Siri), recommending the best videos to watch to hundreds of millions of users every day (e.g., YouTube), or learning to beat the world champion at the game of Go by examining millions of past games and then playing against itself (DeepMind’s AlphaGo).

* This session introduces artificial neural networks, starting with a quick tour of the very first ANN architectures (a small perceptron sketch follows below), then covering topics such as training neural nets, recurrent neural networks, and reinforcement learning.
* This session will clarify what neural networks are and why you may want to use them.
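
As a pointer to those very first architectures, here is a minimal, illustrative perceptron (an assumption, not the session's code) learning the logical AND function.

```python
# Illustrative sketch: a perceptron with the classic update rule, learning AND.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])             # AND truth table

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(20):                    # a few passes over the data suffice
    for xi, target in zip(X, y):
        pred = int(xi @ w + b > 0)     # step activation
        w += lr * (target - pred) * xi # perceptron update rule
        b += lr * (target - pred)

print([int(xi @ w + b > 0) for xi in X])   # -> [0, 0, 0, 1]
```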

Intermediate level; a basic understanding of ML is required.

Avoiding the Pitfalls of Deep Learning: Solving Model Overfitting with Regularization and Dropout

Understanding how to create a deep learning neural network is an essential component of any data scientist's knowledge base. This talk covers some of the challenges that arise when training neural networks. It focuses on the problem of overfitting and its potential remedies: regularisation and dropout. Learners should have at least a basic understanding of linear algebra and calculus.

* Discover what overfitting means and how to recognise it in deep learning models
* Understand how to sample your data to reduce the likelihood of overfitting
* Learn about regularisation and its use as a remedy for overfitting (a minimal PyTorch sketch follows this list)
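
As a hedged, minimal sketch (not the talk's code), here is how dropout and an L2 penalty (weight decay) are typically added to a model in PyTorch.

```python
# Illustrative sketch: dropout in the model, L2 regularisation via weight_decay.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(20, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),        # randomly zero 50% of activations during training
    nn.Linear(64, 2),
)

# weight_decay adds an L2 penalty on the weights to the update rule.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)

model.train()                  # dropout active during training
train_logits = model(torch.randn(16, 20))

model.eval()                   # dropout disabled at evaluation time
with torch.no_grad():
    eval_logits = model(torch.randn(16, 20))
print(train_logits.shape, eval_logits.shape)
```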

Intermediate level.

I assume the audience has a basic knowledge of neural network concepts.
In addition, I assume a basic understanding of linear algebra and calculus, but we won't be using anything more complicated than a derivative and a vector product :)

I recommend watching the video: Deep Learning for Beginners (From Basics to Neural Networks): https://www.youtube.com/watch?v=IuNEfmuoEuM&list=PLqYDykjFMcnGPLPC7cBp9zqgXaRqoUJrC&ab_channel=Codementor

Essential Techniques for Deep Learning to avoid Overfitting

In deep learning, overfitting is a common problem that can hinder the performance and generalization of models. To address this issue, deep learning practitioners use a variety of techniques to regularize their models, including dropout, data augmentation, early stopping, L1 and L2 regularization, and batch normalization.

In this talk, we will explore the essential techniques for avoiding overfitting in deep learning, and discuss their benefits and limitations. Some potential questions to explore during the session include:

1. What is overfitting, and how does it affect the performance and generalization of deep learning models?
2. How does dropout work, and what are some best practices for using it effectively?
3. What are some common data augmentation techniques, and how can they help improve the performance and generalization of models?
4. What is early stopping, and how can it be used to prevent overfitting during training?
5. How do L1 and L2 regularization work, and how do they differ from each other?
6. What is batch normalization, and how can it help prevent overfitting in deep learning models?
7. How do these techniques fit into the broader landscape of deep learning regularization, and what are some emerging trends and challenges in this area?

By the end of this session, participants will have a solid understanding of the essential techniques for avoiding overfitting in deep learning, and will be able to apply these techniques to their own projects and research.
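
To illustrate one technique from the list above, here is a minimal, hedged sketch of early stopping; the validation losses are simulated stand-ins, and a real run would read them from a training loop.

```python
# Illustrative sketch: stop training once validation loss stops improving.
best_val_loss = float("inf")
patience, epochs_without_improvement = 3, 0

simulated_val_losses = [0.9, 0.7, 0.6, 0.61, 0.62, 0.63, 0.64]  # stand-in values

for epoch, val_loss in enumerate(simulated_val_losses):
    if val_loss < best_val_loss:
        best_val_loss = val_loss
        epochs_without_improvement = 0
        # In a real run you would also checkpoint the model weights here.
    else:
        epochs_without_improvement += 1
        if epochs_without_improvement >= patience:
            print(f"early stopping at epoch {epoch}, best val loss {best_val_loss:.2f}")
            break
```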

* Technical requirements: Familiarity with deep learning concepts
* Conferences: Suitable for AI, machine learning, and data science events.
* First public delivery: New offering.
* Target audience: Deep learning practitioners and researchers seeking to improve model performance and generalisation. Above Intermediate level.
* Session duration: Can be adapted to different time slots, ranging from brief talks to longer workshops.

IPL Powerplay Score Prediction using AWS Lambda

As a cricket fan, do you want to predict the score of an IPL match based on the selected team, the innings, the selected players (batsmen & bowlers), and the venue?

Then please go through my blog and predict the score using AWS Lambda.

Medium Blog : https://medium.com/analytics-vidhya/ipl-powerplay-score-prediction-using-aws-lambda-73a580b63ba2
Watch Demo : https://lnkd.in/eCsfPNB7
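
For orientation, here is a minimal, hypothetical AWS Lambda handler shape (not the blog's exact code); `predict_powerplay_score` is a stand-in for whatever model the blog actually deploys.

```python
# Hypothetical sketch: a Lambda handler that returns a predicted powerplay score.
import json

def predict_powerplay_score(team, innings, venue, batsmen, bowler):
    # Placeholder: a real handler would load a trained model and run inference.
    return 48.0

def lambda_handler(event, context):
    body = json.loads(event.get("body", "{}"))
    score = predict_powerplay_score(
        team=body.get("team"),
        innings=body.get("innings"),
        venue=body.get("venue"),
        batsmen=body.get("batsmen"),
        bowler=body.get("bowler"),
    )
    return {"statusCode": 200, "body": json.dumps({"predicted_score": score})}
```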
