
Jules May
Consultant, 22 Consulting
Dundee, United Kingdom
Jules is a freelance consultant specialising in safety-critical systems, mathematical software, and compilers and languages. He has been writing, teaching and speaking for 25 years, and conducts frequent lectures and workshops. He is the author of “Extreme Reliability: Programming like your life depends on it”, and is the originator of Problem Space Analysis.
Topics
GPU programming: for the toughest jobs on planet Earth
For decades now, most computers (and even most phones) have come with some kind of graphics accelerator, and for most of that time, most of us have used it just for creating game graphics. But GPUs are capable of much more than that: they're general-purpose compute engines that can be used to accomplish a whole range of tasks from optimisation to modelling.
Today, new technologies are going mainstream: computer vision systems, generative AI, large language models, and more can demand enormous compute resources, and GPUs are ideally placed to shoulder that load. From small graphics processors on PCs, through complex processor engines with a few thousand cores, up to rented grids with tens of thousands of cores, GPUs represent a uniquely powerful technology for today's workloads.
But how do they work? How can you coordinate the efforts of tens of thousands of threads when it's hard enough coordinating a handful on a regular CPU?
In this session, Jules will introduce GPU programming. You'll learn:
- What the GPU architecture is, and why it differs from a regular CPU;
- What kinds of workloads are well suited to GPUs, and what kinds aren't;
- The compiler and virtualisation technologies by which you can access GPUs: CUDA, OpenCL, and OpenGL;
- How to write, debug and validate your own kernels, using examples from graphics, data science, and machine learning.
By the end of this session, you'll be able to leverage the power of GPUs in your daily work.
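The data-parallel model the session introduces can be sketched in plain Python: each GPU thread computes exactly one output element, identified solely by its index. This is a conceptual illustration only (real kernels would be written in CUDA or OpenCL); the function and variable names are my own, not from the session.

```python
# Conceptual sketch of the GPU execution model: one "thread" per output
# element, each identified only by its index. On a real GPU this function
# body would be a CUDA or OpenCL kernel; here we simply map it over every
# index to illustrate the programming model.

def saxpy_kernel(i, a, x, y, out):
    # Each "thread" reads one element of x and y, and writes one of out.
    out[i] = a * x[i] + y[i]

def launch(kernel, n, *args):
    # Stand-in for a kernel launch: on a GPU, all n invocations would run
    # concurrently across thousands of cores.
    for i in range(n):
        kernel(i, *args)

x = [1.0, 2.0, 3.0, 4.0]
y = [10.0, 20.0, 30.0, 40.0]
out = [0.0] * 4
launch(saxpy_kernel, 4, 2.0, x, y, out)
print(out)  # [12.0, 24.0, 36.0, 48.0]
```

The key design point is that the kernel contains no loop and no shared mutable state beyond its own output slot, which is what lets tens of thousands of such threads run without coordinating with one another.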
Web3.0: A complete beginner's guide
How can we trust the companies with whom we do business to keep our data safe and secure? History has conclusively shown: we can’t. And the companies that collect our data on an industrial scale: can we control how they use it? History has shown they are entirely uncontrollable.
Since the internet got under way (or at least, the Web2.0 version of the internet), privacy has been a constant, and increasing, worry. Those adverts that follow you around your browsing do more than sell you rubbish: they control what job adverts you see, they form part of a prospective employer’s due diligence on you, and they might even affect your access to financial services or healthcare. Despite worldwide data protection regulation, it’s almost impossible for you to find or correct this mass of data ‘they’ hold about you.
Despite these obvious and growing problems, regulation is almost entirely absent, there is no realistic prospect of enforceable codes of conduct, and governments increasingly demand that our already broken system be weakened still further.
To alleviate these problems, there’s an idea growing: a mixture of technical, social, and operational solutions, designed and developed by grass-roots technologists, that are based on the presumption that your data is yours alone, and it is up to you to authorise specific uses of it. “They” can hold no data about you at all. That movement is called Web3.0.
In this talk, Jules will unpack just how important the privacy problem is today (and why it’s getting worse). He’ll explain what Web3.0 is, and some of the cryptographic principles which underpin it. He’ll mention some of the companies and products active in the field, and, hopefully, inspire you to get involved.
Infinitely elastic, highly performant relational databases
Cloud-native computing offers the promise of infinite scalability: as your application makes greater demands on the infrastructure, the infrastructure magically grows to accommodate it. This works for every part of your application except the database.
Normally, you have to trade off scalability, functionality, and performance. At one extreme, you can deploy single instances of commodity databases such as Postgres or MySQL, which don't scale well (if at all). At the other, you can deploy highly scalable engines such as Dynamo, which scale indefinitely, but cost a fortune, support no complex queries at all, and are very tricky to set up even then. Occupying a middle ground are products like Aurora, Hammer, and Citus, which between them explore a number of ways to achieve scalability in a relational model; but in each case you need to apply complex manual tuning, and even then performance can collapse without warning.
Jules has been consulting on a project which leverages any commodity engine to deliver infinitely scalable relational data, with performance guaranteed to be at least as good as the original engine's. Because it uses the underlying engine's wire protocol, it is a drop-in replacement for a manually deployed instance.
In this talk, he discusses why distributed relational data is difficult, why the current solutions fall short, and the mathematical background behind a new theory of relations which allows highly-performant distributed systems to be built.
The cash value of technical debt - How to scare your boss into doing the right thing
As developers, we all know how damaging technical debt can be: it decreases velocity, reliability, and daily joy. We know that true agile working requires constant refactoring to bring technical debt down. And yet, in the constant drive to develop new features and (if we’re lucky) fix old bugs, our lords and masters urge us ever onwards, faster and faster, feature after feature, until the codebase collapses into a sticky mess.
Technical debt is not merely a matter of programmer aesthetics; it goes to the heart of what quality development is all about. But non-technical managers don’t get that. To them, code is just code: if it works, it works, and if it doesn’t, there are plenty of hackers looking for work who are more competent than you.
In order to align our bosses’ needs with ours, we need to be able to express what technical debt is in language which is familiar to them. That is the language of finance. In this talk, we will explore how to quantify the cash value of code, how to measure technical debt, and how to find the actual interest rate the business is paying for it. This is how you can beat the bean-counters at their own game, and put the joy back into your code.
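The kind of arithmetic such a conversation relies on can be sketched in a few lines. All the figures and the cost model below are my own illustrative assumptions, not the speaker's method: debt is treated as a loan whose principal is the clean-up cost and whose interest is the extra effort every feature pays while the debt stands.

```python
# Back-of-envelope model of technical debt as a loan: the "principal" is
# the cost to clean up, and the "interest" is the extra effort every
# feature pays while the debt remains. All figures are invented.

def debt_interest_rate(cleanup_cost, extra_cost_per_feature, features_per_year):
    """Annual interest paid on the debt, as a fraction of the principal."""
    annual_interest = extra_cost_per_feature * features_per_year
    return annual_interest / cleanup_cost

# A refactor estimated at $50,000 that would save roughly $2,000 of
# friction on each of 20 features a year:
rate = debt_interest_rate(50_000, 2_000, 20)
print(f"{rate:.0%} annual interest")  # 80% annual interest
```

Framed that way ("we are borrowing at 80% per annum"), the refactoring conversation becomes one a finance-minded manager can engage with.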
Back to the future: Why analog computers are coming back
Before we had digital electronic computers, we had analog electronic computers, and before then, we had mechanical analog computers. These were very different kinds of machine from our current computers: based on wholly different principles, and using completely different kinds of circuits, they were nevertheless genuinely useful general-purpose computing devices, used for everything from tide prediction to flight control.
Because they can operate at blinding speed and incredibly low power, these analog technologies are making a comeback in domains like digital radio and machine intelligence. A new generation of chips is making analog processors as easy to use as a graphics processor.
What do these machines do that's so different from digital computers? What are their limits? How do you program them? In this talk, Jules will explain the principles, describe some basic programs, and demonstrate a real analog computer solving some real problems.
If considered harmful, or how to eliminate 95% of your bugs in one easy step (Updated)
In 1968, CACM published a letter from Edsger Dijkstra entitled “Go To Statement Considered Harmful”. In it, he explained why most bugs in a program were caused by gotos, and he appealed for goto to be expunged from programming languages. But goto has a twin brother, which is responsible for nearly every bug that appears in programs today. In this session, Jules will revisit Dijkstra’s original explanations, and show why if and goto share the same pathology. He will then go on to explain how to avoid this pathology altogether.
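The session's own remedy isn't detailed in this abstract, but one widely used way to eliminate type-discriminating ifs is polymorphic dispatch, sketched below with hypothetical names of my own choosing:

```python
# A hypothetical illustration of removing a type-discriminating if.
# Before: every new shape means finding and editing this branch, and a
# forgotten branch silently returns None.

def area_with_if(shape):
    if shape["kind"] == "circle":
        return 3.14159 * shape["r"] ** 2
    elif shape["kind"] == "square":
        return shape["side"] ** 2

# After: each variant carries its own behaviour, so there is no branch
# to get wrong or to forget when a new variant is added.

class Circle:
    def __init__(self, r):
        self.r = r
    def area(self):
        return 3.14159 * self.r ** 2

class Square:
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side ** 2

shapes = [Circle(1.0), Square(2.0)]
print([round(s.area(), 5) for s in shapes])  # [3.14159, 4.0]
```

Whether this is the exact technique the session advocates I can't say from the abstract; it is simply the best-known way of making the decision once, at construction time, rather than re-deciding it at every use site.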
Lean: just the meat
There is a lot of talk these days about lean development, lean enterprise, lean everything. As conventionally presented, it is a slightly different take on Agile.
But lean isn’t just Agile. It’s both more than that, and yet much simpler. So, what exactly is Lean, where did it come from, and what does it have to do with development?
How to build a knockout development team
No programmer is an island. Modern programs are created by teams of developers. Everybody knows you need great teams to build great products, so you need to build your teams carefully. But what, exactly, makes a great programming team? Great programming skills? Great interpersonal skills? Working-all-night-because-the-boss-has-thrown-a-fit skills? It turns out it’s none of these. In this session, Jules will reveal that what makes a programming team great is exactly what makes any other team great – and most programming teams don’t have it.
Version Control for Data (New)
Few of us today would consider developing code without the support of a version-control system. And yet our data - which is the lifeblood of our business - tends to exist only in a “present tense”, with no versioning at all.
What would it mean to version data? What would versioned data look like, and how would it differ from versioned code? Most importantly: what would be the business benefits of routinely version-controlled data? In this talk, Jules presents some of the key learnings from a recent project he led which was set up to answer these questions.
Introduction to Problem Space Analysis
How do you design a large system? The architecture of any system is crucial to its success – get this wrong, and the project may never recover. And yet, we are expected to deliver designs that last 5, 10, sometimes 30 years into an unknowable future.
Problem Space Analysis is a technique that informs and documents system designs by anticipating and defining the variabilities of an evolving, long-lived system. It informs the architectural design so that it can accommodate those changes, and it delivers a change-tolerant pervasive language to unify and coordinate the development effort.
In this session, Jules will introduce the principles of Problem Space Analysis, and will show how those principles can be translated into architectures and thence into working systems, even while the goalposts are moving.
Hello, Quantum World!
How would you like to see an actual, quantum computer, actually working?
Everyone's heard about quantum computers - how they'll be able to solve every computational problem in the blink of an eye, decrypting every coded message, and spilling our secrets across the internet. That’s if they ever get delivered: for all the talk, nobody seems able to construct a working quantum computer. So is the whole idea nothing more than fairy dust?
Actually, quantum computers do exist, and we can use them to run real algorithms. Within a few years, quantum computers are going to be a useful part of the programmer’s armoury, routinely solving problems in optimisation, recognition, machine learning, and simulation that no other technology can handle.
(Optionally) Quantum computers are worth studying now because they’re just about ready for commoditisation and large-scale adoption, in the same way that AI systems are now being commoditised. And the kind of techniques that are used to tame uncertainty in quantum systems can also be used to tame unreliability in networks of conventional computers.
This session explains what a quantum computer is, why it is so different from a conventional computer, and how we design quantum algorithms. Finally, it will show a simple, “Hello, Quantum World” program running on real quantum hardware.
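The "Hello, Quantum World" idea the session closes with can be previewed in pure Python: put one qubit into superposition with a Hadamard gate, then measure it. This is only a simulation of the underlying mathematics (my own sketch; a real version would run on quantum hardware via an SDK such as Qiskit).

```python
# Pure-Python simulation of the canonical "Hello, Quantum World" program:
# apply a Hadamard gate to a qubit, then measure it. A single-qubit state
# is a pair of amplitudes [a, b] over the basis states |0> and |1>.
import math
import random

def hadamard(state):
    """Apply the Hadamard gate to a single-qubit state [a, b]."""
    a, b = state
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

def measure(state):
    """Collapse the state: return 0 or 1 with the Born-rule probabilities."""
    p0 = abs(state[0]) ** 2
    return 0 if random.random() < p0 else 1

qubit = [1.0, 0.0]        # start in |0>
qubit = hadamard(qubit)   # equal superposition of |0> and |1>
print(measure(qubit))     # prints 0 or 1, each with probability 1/2
```

On real hardware the interesting part is that the randomness is physical, not pseudo-random; the simulation only reproduces the statistics.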
Programming like your life depends on it: A Reliability Masterclass
No matter how much advancement we see in programming tools and hardware technology, software development remains resolutely difficult. The preoccupation of today’s developers is exactly what it was fifty years ago: how can we create software which works reliably, and how can we extend it without breaking it? We just accept that software is inherently flawed, that all software contains bugs like original sin, and we design our processes around that.
But what if it were possible to write software correctly? What if we could create bug-free, maintainable code? And what if it were cheaper, faster and easier to write correct code than to write the buggy variety? What then?
It turns out that it is possible to write perfect code. In fact, perfect code is not that uncommon - we have been entrusting our lives to it for decades. What do they do, these perfect programmers, that the rest of us don’t? What research backs up their practices? Can we all do what they do?
This course is for developers who want to eliminate not just 95% of their bugs, but all of them. What we’ll cover:
Good code
- Why it matters.
- What, exactly, is software quality?
- What does good code look like?
Exceptions
- How exceptions got this way;
- Why exceptions turn a drama into a crisis;
- What exceptions should have been;
- The (only) valid use of an exception;
- Exception quarantine
Classes and Objects
- The myth of reusability
- Getaway classes (better than flat-pack classes)
- Better living through immutability
- Algebraic groups
- Nullary objects
If considered harmful
- The if anti-pattern, and why debugging makes bugs worse
- Decision trees
- Down-converting factories
Closure
- Exception quarantine redux
- Closures beat dependency injection
Isolation: intra-program firewalls
- Fly-by-wire
- Layering
- Publish and be damned
Concurrency
- Down with multi-threading!
- Immutability redux
Testing
- Why we test
- Why testing doesn't find bugs
- How to do automated testing right
- How to do manual testing right
Working with legacy code
- Debug the roots, not the leaves
- String-typing is non-typing
Toolmaking: Getting emergence on your side.
(Naturally, we won't be able to cover all of these in one course. But I have material for every topic, and I can pick and choose to suit the audience.)
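As a taste of one topic from the outline ("Better living through immutability"), here is a minimal sketch of an immutable value object. It is my own illustration of the general technique, not material from the course:

```python
# A frozen dataclass is a value object that cannot be mutated after
# construction, so it can be shared freely across threads and modules
# without defensive copying. (An illustration only.)
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Money:
    amount: int       # in pence, avoiding floating-point currency bugs
    currency: str

    def add(self, other):
        assert self.currency == other.currency
        # "Mutation" produces a new value; the original is untouched.
        return replace(self, amount=self.amount + other.amount)

price = Money(1999, "GBP")
total = price.add(Money(500, "GBP"))
print(total.amount)  # 2499
print(price.amount)  # 1999 (unchanged)
```

Any attempt to assign to a field of a frozen instance raises an exception, which is exactly the point: a whole class of aliasing and concurrency bugs becomes impossible to write.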
The mirror of Erised: The true value of AI in an over-hyped market
Since generative AI appeared on the scene, delivering products such as ChatGPT and DALL-E, there has been a great deal of excitement and hype over what these technologies will become. Will they become our servants, solving all our problems for us, or will they become our masters, keeping us as pets for as long as we are useful to them? Will they disappear into a post-truth black hole of their own making, or will they lie around unused and rusting because they’ve failed to deliver any value at all?
The thesis of this talk is: none of these outcomes is correct. AI technologies really do have value, but not the value we are expecting.
Starting with a historical and technological perspective, Jules explores the state of the current AI landscape and how it came to be this way. He unpacks the reasons behind the previous three “AI winters”, and describes why another, even colder winter is inevitable (where a lot of people are going to lose a lot of money). Finally, he points the way towards a more adult and less hysterical understanding of AI’s promise, and how to invest in these technologies despite the current, misguided mania.
Why software breaks, and how we can fix it
Today, almost everything works because of the software inside it. And yet, software contains bugs. No matter how carefully we write our code, no matter how thoroughly we test it, sooner or later it will break. If our software doesn't perform reliably, neither can anything else.
Why is it so difficult to create reliable programs? In contrast to nearly every other engineering discipline (which routinely uses techniques such as self-stability, fail-safety, and feedback to build robust and resilient systems), software amplifies disturbances, and so builds systems which are inherently brittle. It doesn’t matter how thoroughly we error-check results or how carefully we catch exceptions: sooner or later a disturbance will start a crack in the code, which can spread to the whole system. That’s why we have to switch it off and switch it on again.
It doesn't have to be this way. We can write intrinsically stable software which uses the lessons from 5000 years of engineering practice. We can make code that consistently and provably behaves perfectly, even when it is impacted by stressors from outside and defects from within. The people who build the software to which we entrust our lives: this is how they do it. And it turns out, once you know the secret, it costs much less money and takes far less time to build code that works perfectly than it does to wrestle with the buggy variety.
In this talk, Jules explains the fundamental difference between software and other kinds of engineering. He explores some of the anti-patterns that we believe will strengthen our code but which (in fact) make matters worse, and introduces a paradigm for creating code which is robust and reliable even in the presence of errors.
This is the key to flawless software, delivered faster.
