Building Trust in AI: On Safe and Responsible Use of LLMs
Generative AI, and in particular large language models (LLMs), has transformed many facets of everyday life and is here to stay. While LLMs have demonstrated remarkable potential as productivity aids, malicious uses and undesirable effects of these technologies are also on the rise. Given this landscape, this talk will cover the following broad topics:
1. Overview of the state of generative AI and LLM development.
2. Overview of current use cases and success stories of LLM development and deployment in specific domains.
3. Focus on threats and mitigation strategies:
- LLM-generated disinformation
- Use of LLMs in social engineering attacks
- Ease of attacking LLMs and common attack types
- Why LLM safety training fails
- Efficient alternatives to safety training, with a focus on inference-time safety steering
4. Best practices and takeaways.

Amrita Bhattacharjee
PhD Candidate, specializing in LLM Safety
Phoenix, Arizona, United States