Session
Navigating the Future: Responsible AI Through Ethics, Governance, Cybersecurity Frameworks & Controls
The adoption and deployment of Large Language Models (LLMs) introduce various cyber risks, including bias, data privacy, trust and ethics concerns, and cyber threats throughout the AI model lifecycle. Strengthening AI frameworks with dynamic risk assessment models, robust data security and privacy measures, and adaptive learning algorithms can help address these challenges. Bias in AI algorithms is a significant ethical consideration when applying AI to cybersecurity, and steps must be taken to identify and mitigate potential biases to ensure fair and responsible use. These steps include using diverse and representative training data, applying technical solutions such as adversarial training and fairness constraints, and establishing governance structures, policies, and procedures.
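As a small illustration of the kind of fairness constraint mentioned above, the sketch below computes a demographic parity difference for a binary classifier's outputs. The predictions, group labels, and tolerance threshold are hypothetical placeholders for illustration only, not part of any specific framework or tool discussed in this session.

```python
# Minimal sketch (hypothetical data): measuring demographic parity difference,
# one simple fairness metric a bias-mitigation pipeline might monitor.

def positive_rate(predictions, groups, group):
    """Fraction of positive predictions (1) among members of `group`."""
    selected = [p for p, g in zip(predictions, groups) if g == group]
    return sum(selected) / len(selected) if selected else 0.0

def demographic_parity_difference(predictions, groups):
    """Largest gap in positive-prediction rate between any two groups."""
    rates = [positive_rate(predictions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Hypothetical model outputs and group membership, for illustration only.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_difference(preds, groups)
print(f"Demographic parity difference: {gap:.2f}")
if gap > 0.2:  # example tolerance; a real policy would set this via governance
    print("Warning: disparity exceeds tolerance; review data or constraints.")
```

In practice, the acceptable disparity threshold and the choice of metric would be set through the governance structures and policies the abstract describes, rather than hard-coded as shown here.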
In this panel, our speakers will discuss AI principles, AI lifecycle management, model risk management frameworks, and building confidence in AI, along with NIST and ISO cybersecurity controls and risk mitigation techniques.

Jay Shah
Cyber Security and DevSecOps Professional
Toronto, Canada