LLMs, Dwarves, and the Future of Analysis: A Field Test
Are we, as data analysts, on borrowed time?
Can years of domain expertise and nuance be replaced by a clever prompt?
Instead of debating the question, I ran a real experiment: I gave an actual exam (one once used at eBay) to two leading LLMs, Gemini 2.5 and Manus AI, with no guidance beyond what a human candidate would receive.
The results were enlightening, odd (and even*), and deeply revealing.
This talk walks through the test, the traps, and how LLMs fumbled or succeeded. It’s not a Luddite rant—it’s a field report on what AI can (and can’t) do when context, reasoning, and data intuition really matter.
* Got it? Odd? Even? %2==0? Oh, never mind.