Session

In the Driver's Seat: An Intentional Approach to AI Governance

Love it or hate it, AI has largely proved that it is here to stay. While the benefits to individuals and companies vary wildly, it is safe to assume that it would be imprudent either to abstain from using AI entirely or to recklessly embrace a tool or service simply because it includes AI functionality.

Operating under the premise that most organizations should consider at least some AI-powered tooling, what considerations should go into finding a healthy risk/reward balance? While the details will be unique to each org, having a carefully designed AI Governance strategy in place, with appropriate oversight mechanisms, is a must for all.

For example, to what extent will GenAI tools like ChatGPT be allowed? Will you have managed accounts that prevent data sharing for model training, or will you rely solely on trust and training to mitigate potential data exfiltration? In another scenario, will you take a scattershot approach to integrating a variety of LLMs into your product in a bid for a competitive advantage, or will a guiding strategy ensure those integrations intentionally bolster your software in a meaningful way?

In this session, we will discuss and share examples of:

- Specific elements of an effective, maintainable AI Governance strategy,
- Resources to guide in the design and implementation of an AI Governance program, and
- Considerations for extending governance, such as vendor risk management, customer communications, etc.

Chris Honda

Manager; Security, Risk, & Compliance @ Plotly

Lehi, Utah, United States
