

Anhad Singh
Founder & CEO at Styrk AI
Actions
Anhad Singh is a cybersecurity & data privacy leader and holds a PhD in Data Privacy from University of California, Davis. Before starting Styrk AI, Anhad led multiple internal and external privacy initiatives at Google as Sr. Data Privacy Engineer. Prior to Google, he was at Dataguise (acquired in 2020) where he took on many different roles, including designing and building software, Product Management and the Head of Technology and Partnerships. Anhad exemplifies thought leadership across cybersecurity & data privacy through various speaking engagements, webinars, publishings and patent contributions.
Links
Area of Expertise
Topics
Ensuring Trust: Security, Privacy, and Compliance in AI Systems
As AI systems become integral to enterprise operations, ensuring their security, privacy, and compliance is paramount. This session will explore the critical aspects of safeguarding AI systems against vulnerabilities, protecting sensitive data, and adhering to regulatory standards. Attendees will gain insights into the latest frameworks and best practices for conducting AI audits, managing data privacy, and preparing for compliance with evolving regulations. Join us to learn how to build trustworthy AI systems that not only drive innovation but also uphold the highest standards of security and ethical responsibility.
In this session, we will delve into the essential components of securing AI systems, focusing on the internal challenges of vulnerability management, data privacy, and regulatory compliance. As AI technologies advance, organizations must prioritize the integrity and trustworthiness of their AI systems. This presentation will provide an overview of the security frameworks and auditing practices necessary to protect AI systems from internal and external threats. Attendees will leave with practical knowledge and tools to enhance the security and compliance of their AI initiatives, ensuring they meet both current and future standards of ethical responsibility and legal compliance.
AI Red Teaming: Harnessing Diverse Expertise for Resilience
AI red teaming is a critical practice for identifying vulnerabilities in machine learning models and applications, but it requires collaboration across a diverse set of experts, including AI engineers, cybersecurity professionals, data scientists, ethical AI researchers, and domain specialists. This session explores the unique perspectives and skills each group brings to the table and highlights the challenges of aligning these diverse teams to secure AI systems effectively. Attendees will learn how to foster cross-disciplinary collaboration, upskill team members in adversarial AI techniques, and leverage tools and frameworks to make AI security accessible to all stakeholders. Real-world examples will demonstrate how diverse expertise in AI red teaming can uncover vulnerabilities and ensure systems are secure, ethical, and resilient.
Security by Design: Proactive Strategies for Robust and Resilient AI/ML Models
AI security is often reactive, addressing vulnerabilities only after they are exploited. This session advocates for a proactive approach, demonstrating how security-minded strategies can enhance model robustness and prevent attacks before they occur. Attendees will learn about techniques such as adversarial training, threat modeling, and real-time monitoring to mitigate risks. By contrasting proactive security strategies with traditional reactive approaches, this session highlights the benefits of integrating security into the AI development lifecycle. Organizations adopting a proactive mindset can achieve not only robustness but also greater trust and reliability in their AI systems.
Building an AI Governance Framework: Privacy, Fairness, and Accountability in Practice
As AI systems become integral to decision-making across industries, the need for robust governance frameworks has never been more critical. This session will provide a practical guide to building an AI governance framework, focusing on three foundational pillars: Privacy & Security, Fairness & Explainability, and Ethics & Accountability. Drawing on best practices and insights from leading frameworks, attendees will learn how to design governance structures that address privacy risks, mitigate bias, ensure transparency, and establish accountability. The session will also explore how these pillars intersect with emerging regulations and organizational goals, offering actionable strategies to align AI systems with ethical and regulatory standards while fostering trust and innovation.
Building Trust in AI: Securing GenAI Applications in a Dynamic Landscape
As generative AI becomes an integral part of our digital landscape, understanding and addressing its unique security challenges has never been more critical. This session explores the evolving threats to generative AI systems, from adversarial attacks and model manipulation to data leakage and privacy concerns. Drawing parallels with traditional cybersecurity, we’ll discuss how established practices can be adapted and where entirely new solutions are required.
AI Auditing: Building Trust and Compliance in the Age of Artificial Intelligence
As the adoption of AI systems continues to accelerate, organizations face increasing pressure to ensure these technologies are compliant with regulations and ethical standards.
This session will delve into the critical role of AI audits in navigating the complex landscape of AI deployment. Attendees will gain insights into the definition and benefits of AI audits, explore best practices, and learn how to derive maximum value from these processes.
The session is designed for mid- to senior-level compliance, risk, and legal professionals, including Chief Compliance Officers, Chief Risk Officers, Compliance Specialists, Ethics Officers, Legal Counsel, Auditors, and Consultants. Participants will leave with actionable strategies to implement robust AI audit practices that build trust with stakeholders and the broader community.
Securing AI/ML Applications from Development to Deployment
AI systems face distinct security challenges at different stages of their lifecycle, from build-time (e.g., data collection, training, and validation) to run-time (e.g., deployment and operation). This session explores the evolving threat landscape across these stages, including risks like data poisoning, adversarial attacks, and prompt injection. Attendees will learn how to implement a layered security approach that bridges the gap between build-time and run-time, incorporating practices such as robust model validation, adversarial training, and continuous monitoring. By addressing these challenges holistically, organizations can ensure their AI systems remain secure and resilient throughout their lifecycle.
2025 Palmetto Cyber ConferenceSessionize Event
ODSC West
Startup Showcase: Styrk AI
AI & Big Data
Navigating the Data & AI Landscape – Ensuring Safety, Security, and Responsibility in Big Data and AI Systems
AWS Re:Inforce
GRC351: Protect Customer Privacy with AWS
AWS Webinars: Data Governance Regulation and Compliance
Protect Customer Privacy While Enriching Data Analytics
AWS Webinars: Data Lakes and Analytics for Financial Services
Prepare your Financial Services Organization for GDPR
Please note that Sessionize is not responsible for the accuracy or validity of the data provided by speakers. If you suspect this profile to be fake or spam, please let us know.
Jump to top