Session
"This search result is perfect... but for whom?" Questioning Alignment in AI Systems
You're building a new feature and face the classic choice: optimize for user goals or business metrics. Easy, right? Just balance them. But then AI enters the picture, and suddenly there's a third party with its own agenda. Who is this feature _actually_ serving?
After a decade building search systems, I thought I understood this dance between user needs and business goals. Two parties, clear tensions, manageable tradeoffs.
Then AI entered the chat, and suddenly it's become a 3D chess game where nobody's quite sure who's winning.
Take Tinder: is the algorithm helping you find love, or keeping you swiping? Or a Spotify sleep playlist: it succeeds when you _don't_ interact, which breaks every engagement metric we know. And don't get me started on LLMs that became sycophants because we optimized for user satisfaction surveys.
You'll walk away with the frameworks for thinking about user goal alignment that I wish I'd had five years ago: how to spot when systems optimize for the wrong goals, why transparency isn't just a nice-to-have, and other questions worth bringing to your next product planning session.
Because building features that users stick with turns out to be good business.
Paul-Louis Nech
Machine Learning @Algolia - I tame _machines_ so that they can truly serve **humans**
Paris, France