Speaker

Anhad Singh

Founder & CEO at Styrk AI

Anhad Singh is a cybersecurity and data privacy leader who holds a PhD in Data Privacy from the University of California, Davis. Before founding Styrk AI, Anhad led multiple internal and external privacy initiatives at Google as a Senior Data Privacy Engineer. Prior to Google, he was at Dataguise (acquired in 2020), where he held a range of roles, from designing and building software to product management to heading Technology and Partnerships. Anhad demonstrates thought leadership in cybersecurity and data privacy through speaking engagements, webinars, publications, and patent contributions.

Area of Expertise

  • Information & Communications Technology

Topics

  • AI & Privacy
  • AI and Cybersecurity
  • AI & ML Solutions
  • AI & Machine Learning
  • AI Ethics
  • AI Bias
  • AI Risk
  • AI Risk Management
  • AI Security
  • Artificial Intelligence Risk
  • Artificial Intelligence and Machine Learning for Cybersecurity
  • InfoSec
  • MLSecOps
  • AISecOps
  • MLOps
  • AIOps
  • AI Privacy
  • Responsible AI
  • Trustworthy AI
  • Threat Modeling
  • Cybersecurity
  • LLMSec
  • Threat Intelligence

AI Red Teaming: Harnessing Diverse Expertise for Resilience

AI red teaming is a critical practice for identifying vulnerabilities in machine learning models and applications, but it requires collaboration across a diverse set of experts, including AI engineers, cybersecurity professionals, data scientists, ethical AI researchers, and domain specialists. This session explores the unique perspectives and skills each group brings to the table and highlights the challenges of aligning these diverse teams to secure AI systems effectively. Attendees will learn how to foster cross-disciplinary collaboration, upskill team members in adversarial AI techniques, and leverage tools and frameworks to make AI security accessible to all stakeholders. Real-world examples will demonstrate how diverse expertise in AI red teaming can uncover vulnerabilities and ensure systems are secure, ethical, and resilient.

Ensuring Trust: Security, Privacy, and Compliance in AI Systems

As AI systems become integral to enterprise operations, ensuring their security, privacy, and compliance is paramount. This session will explore the critical aspects of safeguarding AI systems against vulnerabilities, protecting sensitive data, and adhering to regulatory standards. Attendees will gain insights into the latest frameworks and best practices for conducting AI audits, managing data privacy, and preparing for compliance with evolving regulations. Join us to learn how to build trustworthy AI systems that not only drive innovation but also uphold the highest standards of security and ethical responsibility.

In this session, we will delve into the essential components of securing AI systems, focusing on the internal challenges of vulnerability management, data privacy, and regulatory compliance. As AI technologies advance, organizations must prioritize the integrity and trustworthiness of their AI systems. This presentation will provide an overview of the security frameworks and auditing practices necessary to protect AI systems from internal and external threats. Attendees will leave with practical knowledge and tools to enhance the security and compliance of their AI initiatives, ensuring they meet both current and future standards of ethical responsibility and legal compliance.

AI Auditing: Building Trust and Compliance in the Age of Artificial Intelligence

As the adoption of AI systems continues to accelerate, organizations face increasing pressure to ensure these technologies are compliant with regulations and ethical standards.

This session will delve into the critical role of AI audits in navigating the complex landscape of AI deployment. Attendees will gain insights into the definition and benefits of AI audits, explore best practices, and learn how to derive maximum value from these processes.

The session is designed for mid- to senior-level compliance, risk, and legal professionals, including Chief Compliance Officers, Chief Risk Officers, Compliance Specialists, Ethics Officers, Legal Counsel, Auditors, and Consultants. Participants will leave with actionable strategies to implement robust AI audit practices that build trust with stakeholders and the broader community.

Securing AI/ML Applications from Development to Deployment

AI systems face distinct security challenges at different stages of their lifecycle, from build-time (e.g., data collection, training, and validation) to run-time (e.g., deployment and operation). This session explores the evolving threat landscape across these stages, including risks like data poisoning, adversarial attacks, and prompt injection. Attendees will learn how to implement a layered security approach that bridges the gap between build-time and run-time, incorporating practices such as robust model validation, adversarial training, and continuous monitoring. By addressing these challenges holistically, organizations can ensure their AI systems remain secure and resilient throughout their lifecycle.

ODSC West

Startup Showcase: Styrk AI

October 2024 Burlingame, California, United States

AI & Big Data

Navigating the Data & AI Landscape – Ensuring Safety, Security, and Responsibility in Big Data and AI Systems

June 2024 Santa Clara, California, United States

AWS re:Inforce

GRC351: Protect Customer Privacy with AWS

June 2019 Boston, Massachusetts, United States

AWS Webinars: Data Governance Regulation and Compliance

Protect Customer Privacy While Enriching Data Analytics

October 2018

AWS Webinars: Data Lakes and Analytics for Financial Services

Prepare your Financial Services Organization for GDPR

September 2018
