People are unpredictable too! - AI agent patterns from human agent best practices

At a recent conference I attended, a question was raised:
“When will we be able to trust AI agents to take care of tasks such as travel booking fully autonomously?”

Perhaps we already can. Every day, organisations delegate responsibility to agents who are non-deterministic, exploitable, and potentially misaligned: our employees, colleagues and peers.

This talk starts by reviewing how delegating control to human agents can go wrong:

- Britta Nielsen embezzling millions from Denmark’s welfare department
- Edward Snowden’s deliberate exfiltration of top secret information
- In the UK, the OBR's accidental early release of a budget review
- The myriad social engineering scams that people fall victim to every day

When human systems work well, controls exist to limit the risk and impact of these problems. The talk reviews some of the most common controls, and explains with concrete examples how analogous controls can be used to place constraints on AI agents. For example:

- Review by an authority
  - Newspaper editors
  - Expense approval
- Review by peers
  - Software pull requests
- Separation of duties
  - IT deployment practices
  - Healthcare: doctor prescribes, pharmacist reviews
- Technological aides
  - Email spam filters
  - Fraudulent transaction detection
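As a minimal sketch of how one such control might translate to an AI agent, the following hypothetical approval gate mirrors an expense-approval workflow: low-impact actions proceed automatically, while high-impact ones are held for human review. The names, threshold, and cost field are illustrative assumptions, not part of any real framework.

```python
# Hypothetical approval gate for agent actions, modelled on
# expense-approval workflows: spend under a policy cap is
# auto-approved, anything above it is routed to a human reviewer.
from dataclasses import dataclass


@dataclass
class ProposedAction:
    description: str
    cost: float  # estimated financial impact of the action


APPROVAL_THRESHOLD = 500.0  # assumed policy limit, like an expense cap


def route_action(action: ProposedAction) -> str:
    """Return 'auto-approve' for low-impact actions, 'needs-review' otherwise."""
    if action.cost <= APPROVAL_THRESHOLD:
        return "auto-approve"
    return "needs-review"


print(route_action(ProposedAction("Book economy flight", 320.0)))    # auto-approve
print(route_action(ProposedAction("Book business flight", 2400.0)))  # needs-review
```

The same routing shape applies to the peer-review and separation-of-duties controls above: the agent proposes, a distinct party (human or second system) disposes.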

The talk proceeds to discuss accountability, using examples such as Moffatt v. Air Canada.

Finally, the talk sums up with a review of what it means to take a risk-based approach: AI agents don't have to be perfect; they have to pass the risk equation.
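One hedged way to read "pass the risk equation" is as a comparison of expected losses: an agent is acceptable when its residual risk, after controls, is no worse than the human baseline. The error rates, impact figure, and catch rates below are made-up illustrative numbers, not claims from the talk.

```python
# Hypothetical risk comparison: expected loss per task is the error
# rate, discounted by how often review catches the error, times the
# impact of an uncaught error. All numbers are illustrative.
def expected_loss(error_rate: float, impact: float, catch_rate: float) -> float:
    """Expected cost per task after a review step catches some errors."""
    return error_rate * (1 - catch_rate) * impact


# A human who errs less but is reviewed less thoroughly...
human = expected_loss(error_rate=0.02, impact=1000.0, catch_rate=0.5)
# ...versus an agent that errs more but sits behind stronger controls.
agent = expected_loss(error_rate=0.05, impact=1000.0, catch_rate=0.9)

print(agent <= human)  # True: under these assumptions, the agent passes
```

The point is not the specific numbers but the structure: controls move the agent's residual risk, so imperfect agents can still clear the bar that imperfect humans already clear.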

Matthew Livesey

Never met a computer I didn’t like

Copenhagen, Denmark
