Session

The problem of AI

Shouldn’t AI help us close the gaps between genders and across minority groups? There have been many cases where AI has shown bias against people from different backgrounds, or a preference for men over women. Examples include Amazon’s sexist hiring algorithm and racism in the American healthcare system. High bias can stem from multiple sources, including poor data quality, preparation, or collection techniques. Training an AI algorithm on data that contains indirect prejudices can produce a model with high bias, which leads to inherently biased results. Given the ever-increasing reliance on AI, including the recent success of the likes of ChatGPT and Microsoft’s improved Bing search engine, it is crucial that we know how to integrate technology into our lives that does not perpetuate the same biases or discriminatory tendencies that humans do. In this talk, we will challenge traditional AI practices by looking at how to build unbiased and trustworthy AI with what is currently available in the tech space.
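One common way to detect the kind of bias described above is a demographic-parity check, which compares selection rates between groups. The sketch below is a minimal illustration with entirely hypothetical hiring data (the group names and outcomes are assumptions for the example, not from the talk):

```python
# Minimal sketch of a demographic-parity check on hypothetical
# hiring-screen outcomes. All data below is illustrative.

def selection_rate(outcomes):
    """Fraction of candidates selected (outcome == 1)."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_diff(group_a, group_b):
    """Absolute gap in selection rates between two groups.
    A value near 0 suggests parity; larger values flag potential bias."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

# Hypothetical screening outcomes (1 = advanced to interview)
male_outcomes = [1, 1, 0, 1, 1, 0, 1, 1]      # 6 of 8 selected
female_outcomes = [1, 0, 0, 1, 0, 0, 1, 0]    # 3 of 8 selected

gap = demographic_parity_diff(male_outcomes, female_outcomes)
print(f"Demographic parity difference: {gap:.2f}")  # 0.38
```

A gap this large on real data would be a signal to audit the training data and features before deploying such a model.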

Ayesha Bhatti

Junior Associate Software Developer, Publicis Sapient
