Speaker

Matthew Livesey

Never met a computer I didn’t like

Copenhagen, Denmark


I am an experienced software developer and engineering leader who loves learning and sharing.

Area of Expertise

  • Finance & Banking
  • Information & Communications Technology
  • Physical & Life Sciences
  • Region & Country

Topics

  • Python
  • Docker
  • LLMs
  • Leadership
  • Algorithms
  • Data
  • AI
  • DevOps
  • Scala
  • Data Engineering
  • AWS
  • Cloud
  • Terraform
  • SQL

How can organisations balance digital sovereignty and public cloud adoption?

Digital Sovereignty is an increasingly important issue within the EU and in Denmark in particular.

Public cloud providers such as Microsoft Azure offer unparalleled options for building scalable, highly available systems using serverless architectural patterns.

While prominent figures such as Denmark’s digital minister announce plans to move away from “big tech” providers, US cloud providers are developing EU sovereign versions of their platforms to address concerns.

The benefits of the public cloud must be balanced against the risks of giving up control over data and applications to the providers, especially in regulated industries like Life Sciences and Financial Services.

Furthermore, in the age of AI everywhere, access to compute resources, especially GPUs, becomes a sovereignty issue.

Based on my article on this topic, this panel discussion explores it in depth, giving the audience insight beyond the simplistic rhetoric that sometimes characterises the debate.

https://www.linkedin.com/pulse/digital-sovereignty-having-moment-denmark-matt-livesey-b56kf

Personal Software

A common reaction to the ability of AI models to write software is: “I can create apps and sell them as products!”

On second consideration it should be clear that if you can build an app that is valuable to someone just by prompting a model, so can everyone else.

To create something with unique value, some kind of edge is required. It could be niche domain knowledge, a proprietary dataset, or an advanced technical component that the model is not (yet) capable of producing.

Alternatively, when the model causes the cost of software production to drop dramatically, we can stop thinking about the app that brings value to many, and think about the app that brings value just to ourselves: personal software.

This talk discusses some of the advantages of building software for one’s own use, and how it avoids some of the complexities of products:

- Single platform (“I have an iPhone”)
- No requirement for monetisation (ads, subscription integration not required)
- No need for scale on the server side
- No need for accessibility and internationalisation beyond your own need

And how it brings some additional advantages:
- No user tracking
- No malware, spyware, etc risk
- No incentive to focus on “user engagement”, addictive dark patterns and so on.

The talk then proceeds to demonstrate an example, using a proposed architecture suitable for personal software (a short code sketch of the serverless piece follows the list):

- Progressive Web App deployment (no need for App Store integration)
- Standard web technologies (play to AI models’ strengths)
- Serverless deployment pattern (unlikely to exceed cloud platform’s free tier on a personal app)
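
As a hedged illustration of the serverless piece of this architecture (not the talk's actual demo), the sketch below shows an AWS Lambda-style handler behind an API gateway serving JSON to the PWA front end; the /notes route and the stored data are hypothetical placeholders.

```python
# Minimal sketch of a personal app's serverless back end: an AWS Lambda-style handler.
# Assumes the common API Gateway proxy event shape (httpMethod, path); data is a placeholder.
import json

NOTES = [{"id": 1, "text": "Water the plants"}]  # stand-in for the app's real storage

def lambda_handler(event, context):
    """Serve the app's data for GET /notes; return 404 for anything else."""
    if event.get("httpMethod") == "GET" and event.get("path") == "/notes":
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps(NOTES),
        }
    return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
```

A deployment along these lines typically stays well within a cloud platform’s free tier for a single user, which is the point of the pattern.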

Finally, the talk explores the implications for software-as-a-service platforms. Which are safe? Which are at risk? How much better do models need to get to threaten a typical SaaS business?

MCP Demystified

Model Context Protocol (MCP) is emerging as a de facto standard for integrating tools with LLMs. As with most new technology, especially anything related to AI, it is shrouded in hype and confusion. What exactly is MCP, how is it implemented, and what can it do and not do?
This talk explains the purpose and goal of MCP: it solves the problem of integrating large language models with other systems in a consistent, interoperable way.

- What was the state of the art for integrating LLMs and tools prior to MCP?
- What were the problems and limitations of those approaches?
- How does MCP resolve those limitations?

The talk then dives deep into the details of how MCP is implemented by building an MCP server from scratch.
The audience will discover how MCP uses established tech such as JSON-RPC and standard I/O to define a common integration pattern for building AI solutions. Once these nuts and bolts are laid bare, the demonstration moves on to solve a real-world problem via the server implementation.
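
As a taste of what “from scratch” means here, below is a minimal sketch of the stdio/JSON-RPC loop such a server might run. The “add” tool, server name and version strings are illustrative placeholders; a real session also involves capability negotiation, notifications and proper error handling, and exact field names should be checked against the current MCP specification.

```python
# Minimal sketch of an MCP-style server speaking newline-delimited JSON-RPC over stdio.
# Field names follow my reading of the MCP spec; treat them as illustrative, not authoritative.
import json
import sys

# One hypothetical tool exposed by this server.
TOOLS = [{
    "name": "add",
    "description": "Add two numbers",
    "inputSchema": {
        "type": "object",
        "properties": {"a": {"type": "number"}, "b": {"type": "number"}},
        "required": ["a", "b"],
    },
}]

def handle(request: dict) -> dict | None:
    """Dispatch one JSON-RPC request; return a response, or None for anything unhandled."""
    method = request.get("method")
    params = request.get("params", {})
    if method == "initialize":
        result = {"protocolVersion": "2024-11-05",
                  "capabilities": {"tools": {}},
                  "serverInfo": {"name": "demo-server", "version": "0.1"}}
    elif method == "tools/list":
        result = {"tools": TOOLS}
    elif method == "tools/call" and params.get("name") == "add":
        args = params.get("arguments", {})
        result = {"content": [{"type": "text", "text": str(args["a"] + args["b"])}]}
    else:
        return None  # notifications and unknown methods are ignored in this sketch
    return {"jsonrpc": "2.0", "id": request.get("id"), "result": result}

# The stdio transport: read one JSON-RPC message per line, write responses to stdout.
for line in sys.stdin:
    if not line.strip():
        continue
    response = handle(json.loads(line))
    if response is not None:
        sys.stdout.write(json.dumps(response) + "\n")
        sys.stdout.flush()
```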

Finally, the talk explains the less-used capabilities of MCP beyond tools – for example, how the “sampling” concept allows tools to initiate communication with the LLM, a reversal of the typical tool pattern.

Outline:
- Why do we want to integrate tools with LLMs?
- Prior state of the art (ChatGPT plugins, Langchain tools) and their limitations
- MCP – what it is
- MCP – how it solves the problems
- Deep dive – What is stdio?
- Deep dive – What is JSON-RPC?
- Deep dive – The steps in the MCP communication protocol (sketched in code after this outline)
- Real world problem – Implement an MCP server to solve … (problem TBC)
- Beyond tools – what else can MCP do and why sampling matters.
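
To make the protocol steps in the outline concrete, here is a hedged sketch of the client side of a stdio session: spawn the server process, then send initialize, tools/list and tools/call as newline-delimited JSON-RPC requests. The script name my_mcp_server.py is hypothetical, and the notifications/initialized step and error handling are omitted for brevity.

```python
# Illustrative client-side walk-through of the stdio handshake: initialize -> tools/list -> tools/call.
# "my_mcp_server.py" is a hypothetical server script (e.g. the sketch earlier in this abstract).
import json
import subprocess

server = subprocess.Popen(
    ["python", "my_mcp_server.py"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)

def rpc(method: str, params: dict, id_: int) -> dict:
    """Write one JSON-RPC request to the server's stdin and read the one-line reply."""
    server.stdin.write(json.dumps(
        {"jsonrpc": "2.0", "id": id_, "method": method, "params": params}) + "\n")
    server.stdin.flush()
    return json.loads(server.stdout.readline())

print(rpc("initialize", {"protocolVersion": "2024-11-05", "capabilities": {},
                         "clientInfo": {"name": "demo-client", "version": "0.1"}}, 1))
print(rpc("tools/list", {}, 2))
print(rpc("tools/call", {"name": "add", "arguments": {"a": 2, "b": 3}}, 3))
```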

People are unpredictable too! - AI agent patterns from human agent best practices

At a recent conference I attended, a question was raised:
“When will we be able to trust AI agents to take care of tasks such as travel booking fully autonomously?”

Perhaps we already can. Every day, organisations delegate responsibility to agents who are non-deterministic, exploitable, and potentially misaligned - our employees, colleagues and peers.

This talk starts by reviewing how delegating control to human agents can go wrong:

- Britta Nielsen embezzling millions from Denmark’s welfare department
- Edward Snowden’s deliberate exfiltration of top secret information
- In the UK, the OBR’s accidental early release of a budget review
- The myriad of social engineering scams that people fall victim to every day

When human systems work well, controls exist to limit the risk and impact of these problems. The talk reviews some of the most common controls, and explains with concrete examples how analogous controls can be used to place constraints on AI agents. For example (a code sketch of one such control follows the list):

- Review by an authority
  - Newspaper editors
  - Expense approval
- Review by peers
  - Software pull requests
- Separation of duties
  - IT deployment practices
  - Healthcare: a doctor prescribes, a pharmacist reviews
- Technological aids
  - Email spam filters
  - Fraudulent transaction detection
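
As a concrete sketch of the first control (review by an authority) applied to an AI agent, the example below routes high-risk tool calls through a human approval step before they execute, much like expense approval or a pull-request review. The tools, the notion of which calls are high risk, and the console approver are hypothetical choices for illustration.

```python
# Hedged sketch: gate an agent's high-risk tool calls behind human approval,
# mirroring human controls such as expense approval or pull-request review.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    run: Callable[..., str]
    requires_approval: bool  # the "needs a second pair of eyes" flag

def execute(tool: Tool, approver: Callable[[str], bool], **kwargs) -> str:
    """Run a tool, pausing high-risk calls until a human reviewer approves them."""
    if tool.requires_approval:
        description = f"{tool.name}({kwargs})"
        if not approver(description):
            return f"BLOCKED: {description} was not approved"
    return tool.run(**kwargs)

# Hypothetical tools: booking refundable travel is low risk, moving money is not.
book_travel = Tool("book_travel", lambda destination: f"Booked trip to {destination}", False)
transfer_funds = Tool("transfer_funds", lambda amount: f"Transferred {amount} DKK", True)

console_approver = lambda desc: input(f"Approve {desc}? [y/N] ").strip().lower() == "y"

print(execute(book_travel, console_approver, destination="Copenhagen"))
print(execute(transfer_funds, console_approver, amount=25_000))
```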

The talk proceeds to discuss accountability, using examples such as Moffatt v. Air Canada.

Finally, the talk sums up with a review of what it means to take a risk-based approach: AI agents don’t have to be perfect; they have to pass the risk equation.
