Speaker

Ankita Gupta

Ankita Gupta

Cofounder and CEO of Akto.io

San Francisco, California, United States

Actions

Ankita is the co-founder and CEO of Akto.io. Prior to Akto, she had experience working in VMware, LinkedIn, and JP Morgan. She holds an MBA from Dartmouth College and a Bachelors in Technology from IIT Roorkee. She has 11 years of experience in the field and is extremely passionate about solving application security problems. Her work has been featured in Forbes, Venture Beats, and Dark Reading.

She is an active speaker at Defcon, Bsides conferences, API Summit, API Days and OWASP community events.

Area of Expertise

  • Business & Management
  • Information & Communications Technology

Topics

  • cybersecurity
  • api security
  • Application Security
  • DevSecOps
  • GitHub
  • Developer
  • InfoSec
  • Security
  • web security
  • Security & Compliance
  • api
  • API Testing
  • Venture Capital
  • StartUp
  • Female founders
  • Startup Technologies
  • Data Security
  • Information Security
  • Cloud App Security
  • Cyber Security basics
  • cyber security
  • IT Security
  • cybersecurity awareness
  • GraphQL
  • Apollo GraphQL
  • REST APIs
  • REST
  • Developer Tools
  • llm
  • LLMs
  • LLMSec
  • GenAI
  • generative ai
  • Startups
  • Technology Startups
  • Early Stage Startups
  • Startup Growth
  • AI for Startups

7 Most Critical Security Tests for GraphQL APIs

Popularity of GraphQL is skyrocketing. We have been working to solve GraphQL security for more than two years now and have developed 40+ tests in this category. We will showcase 7 most critical tests. These are written in YAML format.

Purpose: To educate developers and security teams on how to conduct security testing on GraphQL APIs
This will be complemented with real case study and Damn Vulnerable GraphQL Application (DVGA).

1. Overview of GraphQL Security with examples.
2. Introspection Mode Test:
3. Overfetching Test:
4. High Depth Exploiting Recursive Types Test:
5. Excessive Errors Test:
6. Find Objects and Add Keys Test:
7. CSRF Content-Type Test:
8. CSRF Through GET Requests Test
9. Automate these tests in CI/CD

AI in Cybersecurity: 5 Growing Trends in 2024

Outline:
1. Introduction to AI in Cybersecurity
2. Enhanced Threat Detection and Response
- Case Study: IBM's Threat Detection and Response service
- Case Study: Darktrace's machine learning technology
3. The Progression of Adversarial AI
- Case Study: DeepArmor's endpoint protection platform
4. AI-Enhanced Authentication
- Case Study: BioCatch's behavioral biometrics
5. AI-Powered Automated Response
- Case Study: Rapid7's InsightIDR
6. Democratization of AI Security Solutions
- Case Study: Akto's API Discovery and scanning
7. Discussion on adversarial AI tactics and countermeasures
8. Conclusion and Future Predictions

Sources of Information:

1. MarketsandMarkets Report on AI in Cybersecurity
2. Capgemini Research Institute Survey
3. Case studies from IBM, Darktrace, DeepArmor, BioCatch, Rapid7, and Akto
4. Ponemon Institute Study on Data Breaches

Session Format:

This 25-minute session will start with a brief 5-minute introduction to the role of AI in cybersecurity.
I will then dedicate 3 minutes to each of the five key trends, utilizing real-world examples and case studies for a more tangible understanding.
I will discuss the implications each trend has for the cybersecurity landscape.
A concise 2-minute Q&A segment will follow each trend discussion to address immediate questions.
I will conclude with a quick 2-minute summary and a brief discussion on future predictions.

Securing LLM APIs

Discover how to secure LLM APIs. By the end of this session, you will have complete knowledge of Top 10 LLM vulnerabilities by OWASP with real life examples. You will also learn about 5 most critical LLM Security tests and best practices when deploying and managing LLM APIs.

[What] are LLM APIs?
The first part of the presentation will introduce LLM APIs, discussing how developers use and deploy them. With examples of LLM APIs such as OpenAI's ChatGPT and Google's BERT (Bidirectional Encoder Representations from Transformers).

[Why] secure LLM APIs?
The second part of the presentation will educate the audience on the importance of securing LLM APIs and highlight the top 10 vulnerabilities to consider when deploying or using LLM APIs in code. Real-world examples, including CVE-2023-37274 (Path traversal exploitation in Auto-GPT), Samsung's Data Leak via ChatGPT (Accidental information disclosure), and Meta's LLaMa Leak (Unauthorized model access and data dissemination) will be discussed in detail.

[How] can we ensure security of LLM APIs?
The final part of the presentation will focus on teaching best practices and tests to secure LLM APIs. It will cover 5 critical LLM Security Tests:
1. Test 1: Sensitive Data Exposure in LLMs - AWS Keys: It finds out if the LLM can reveal AWS secret keys when provided with a specific prompt input.
2. Test 2: Insecure Output Handling Test on LLMs: RCE with terminal command.
3. Test 3: Prompt Leak Injection Test on LLMs: It employs RegEx pattern matching to detect internal prompt leaks that may lead to unauthorized access.
4. Test 4: Overreliance test on LLMs - Package Hallucination: It aims to assess the behavior of LLMs when users rely on them to provide accurate and appropriate content.
5. Test 5: OBFUSCATION test on LLMs: It aims to assess how the LLM handles obfuscated or encoded input and whether it can potentially trigger vulnerabilities when decoding such input.

Playbook for secure API Authentication and Authorization

Secure API authentication and authorization is at the heart of Application Security. We will explore the landscape of popular vulnerabilities such as brute force attacks, JWT vulnerabilities, inadequate session management, user enumeration, Captcha related vulnerabilities, login page weaknesses, Broken Object Level Authorization (BOLA), and privilege escalation.

This session will focus on practical examples and testing use cases that could have potentially prevented top 5 attacks related to authentication and authorization in the year 2023.

This session is invaluable resource for developers, Application and Product security professionals, and anyone involved in Security by Design.

Open Source API Security for devsecops

We want to present in Arsenal. Akto is an open source API Security product. During the session, we will showcase how to:

1. Automate your API inventory and generate open API spec file
2. We will teach how to write custom test for security testing with live demo of 20+ custom business logic tests.
3. Automate API security testing in CI/CD with GitHub Actions as an example

Ankita Gupta

Cofounder and CEO of Akto.io

San Francisco, California, United States

Actions

Please note that Sessionize is not responsible for the accuracy or validity of the data provided by speakers. If you suspect this profile to be fake or spam, please let us know.

Jump to top