Session

Unlocking LLMs: From Forgiving Errors to Ethical Considerations in Domain Adaptation

Synthetic data generation has helped mitigate bias and data-scarcity challenges in machine learning (ML) pipelines. Recently, large language models (LLMs) have been widely used for this purpose, but inherent bias, data privacy and confidentiality concerns, hallucinations, and their “stochastic parrot” nature limit the reliability of LLMs for direct, unsupervised use. This talk therefore works through a real-world use case on hazards in food products, demonstrates intelligent prompting to generate the first expert-annotated food-hazard dataset, and highlights the dangers of forgiving errors and cherry-picking LLM outputs in sensitive domains.

Dr. Vivek Kumar

Senior Researcher


