Session
Anthropomorphic (human-like) trust, as a prerequisite for AI product adoption
You can also find a short video pitch for my proposal here: https://youtu.be/fjUtj3otD2U
Making AI products human-like isn’t optional—it’s essential for adoption. For decades, software taught users to expect full control. But with agentic AI, that model breaks. Users are no longer the sole source of input, context, or action. AI now executes complex tasks autonomously, requiring new forms of trust.
To unlock this potential, AI must feel trustworthy even in the absence of oversight. This means designing agents with consistent temperaments, transparent reasoning, and values that project predictable behavior. Deterministic interfaces fall short here—agentic AI thrives in ambiguity.
Yet many Product Managers hesitate to anthropomorphize AI, fearing blurred responsibility or unrealistic expectations. This session challenges that hesitation, showing that human-like AI is a prerequisite, not a risk.
We’ll explore product strategies, case studies, and design cues that build intuitive, human-like AI, helping users feel confident delegating work to autonomous systems.

Mike Nedelko
Technical Product Leader with an Entrepreneurial Edge
Brisbane, Australia