Off-Limits Data: Training AI in a World of Contracts, Clauses, and Compliance
As large language models (LLMs) move from pilot to production in healthcare, a new governance challenge is emerging: defining what data can be used for training and what’s contractually or ethically off-limits. Increasingly, healthcare organizations are embedding strict “no LLM training” clauses into their contracts, sometimes even for internal tools. But without clear internal policies, prompt libraries, and data governance structures, teams risk accidental violations or over-compliance that slows innovation.
This session explores how to build an effective AI training data policy:
* What counts as "training" versus "inference" or "fine-tuning"?
* How should teams handle customer data governed by conflicting contract terms?
* Which internal practices (e.g., model logs, prompt libraries, data tagging) reduce governance risk?
* How can teams navigate overly broad contractual restrictions without shutting down their AI roadmap?
Attendees will leave with a practical playbook for managing AI data boundaries inside fast-moving healthtech environments.

Alex Tuck
Managing Principal @ Tuck Consulting Group
Burlington, Vermont, United States