Session

Fighting ghosts and monsters: How we prevent scary hallucinations in our AI platforms

AI is everywhere, and it will no doubt feature prominently in the talks at Codegarden, whether discussing its use, how technologists can benefit from it, or how it can be used to infer or predict outcomes.

But our predilection for assuming that all answers from AI are correct is fraught with danger. Data is fundamental to an LLM's ability to perform effectively.

There’s a reason one study’s model began treating the presence of a measuring ruler in an image as a sign of a malignant tumour, and there’s a reason ‘Tay’, a Microsoft chatbot, learnt discrimination within hours of going public.

That reason is the data used to train them.

In this talk I will discuss how we can protect ourselves against data bias, and how the quality, quantity, and representativeness of the data used to train an AI system directly affect its ability to perform. I will also highlight examples where this has gone wrong and where remarkable (and scary) AI hallucinations have occurred.

Matt Sutherland

Head of Technology, true digital

Bristol, United Kingdom
