Exploring Bias in AI-Driven Coding Tasks
Bias is nothing new. Studies have shown that pull requests can be judged differently based on the submitter's gender, and that resumes are filtered unfairly by race or gender. Authors have historically adopted pen names to avoid discrimination. LLMs have been trained on this biased information.
Can the information your LLM knows about you, or what you tell it, significantly impact its results? Could it affect how quickly you arrive at solutions for coding tasks? The difference is stark: in my experiments, the LLM required twice as many questions to deliver code when I specified a different gender.
In this session, I’ll share what I discovered, whether these biases are problematic or beneficial, and open the floor to a critical discussion on how we can approach fairness in the tools shaping the future of development. If you’re curious about the intersection of AI, bias, and developer productivity, this talk is for you.